00:00:00.001 Started by upstream project "autotest-per-patch" build number 132818
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.062 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.062 The recommended git tool is: git
00:00:00.063 using credential 00000000-0000-0000-0000-000000000002
00:00:00.064 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.117 Fetching changes from the remote Git repository
00:00:00.119 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.188 Using shallow fetch with depth 1
00:00:00.188 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.188 > git --version # timeout=10
00:00:00.250 > git --version # 'git version 2.39.2'
00:00:00.250 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.292 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.292 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.806 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.816 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.826 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.826 > git config core.sparsecheckout # timeout=10
00:00:04.837 > git read-tree -mu HEAD # timeout=10
00:00:04.853 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.870 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.870 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.969 [Pipeline] Start of Pipeline
00:00:04.980 [Pipeline] library
00:00:04.981 Loading library shm_lib@master
00:00:04.981 Library shm_lib@master is cached. Copying from home.
00:00:04.993 [Pipeline] node
00:00:05.004 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:05.005 [Pipeline] {
00:00:05.011 [Pipeline] catchError
00:00:05.012 [Pipeline] {
00:00:05.021 [Pipeline] wrap
00:00:05.027 [Pipeline] {
00:00:05.032 [Pipeline] stage
00:00:05.033 [Pipeline] { (Prologue)
00:00:05.043 [Pipeline] echo
00:00:05.044 Node: VM-host-SM38
00:00:05.048 [Pipeline] cleanWs
00:00:05.057 [WS-CLEANUP] Deleting project workspace...
00:00:05.057 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.064 [WS-CLEANUP] done
00:00:05.290 [Pipeline] setCustomBuildProperty
00:00:05.389 [Pipeline] httpRequest
00:00:05.873 [Pipeline] echo
00:00:05.874 Sorcerer 10.211.164.112 is alive
00:00:05.879 [Pipeline] retry
00:00:05.881 [Pipeline] {
00:00:05.888 [Pipeline] httpRequest
00:00:05.893 HttpMethod: GET
00:00:05.894 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.894 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.905 Response Code: HTTP/1.1 200 OK
00:00:05.906 Success: Status code 200 is in the accepted range: 200,404
00:00:05.906 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.495 [Pipeline] }
00:00:08.513 [Pipeline] // retry
00:00:08.520 [Pipeline] sh
00:00:08.812 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.824 [Pipeline] httpRequest
00:00:09.835 [Pipeline] echo
00:00:09.837 Sorcerer 10.211.164.112 is alive
00:00:09.844 [Pipeline] retry
00:00:09.846 [Pipeline] {
00:00:09.857 [Pipeline] httpRequest
00:00:09.862 HttpMethod: GET
00:00:09.863 URL: http://10.211.164.112/packages/spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz
00:00:09.864 Sending request to url: http://10.211.164.112/packages/spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz
00:00:09.865 Response Code: HTTP/1.1 200 OK
00:00:09.866 Success: Status code 200 is in the accepted range: 200,404
00:00:09.866 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz
00:00:34.884 [Pipeline] }
00:00:34.902 [Pipeline] // retry
00:00:34.909 [Pipeline] sh
00:00:35.197 + tar --no-same-owner -xf spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz
00:00:38.514 [Pipeline] sh
00:00:38.798 + git -C spdk log --oneline -n5
00:00:38.798 86d35c37a bdev: simplify bdev_reset_freeze_channel
00:00:38.798 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:00:38.798 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:00:38.798 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:00:38.798 0ea9ac02f accel/mlx5: Create pool of UMRs
00:00:38.818 [Pipeline] writeFile
00:00:38.835 [Pipeline] sh
00:00:39.121 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:39.134 [Pipeline] sh
00:00:39.423 + cat autorun-spdk.conf
00:00:39.423 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:39.423 SPDK_TEST_NVME=1
00:00:39.423 SPDK_TEST_FTL=1
00:00:39.423 SPDK_TEST_ISAL=1
00:00:39.423 SPDK_RUN_ASAN=1
00:00:39.423 SPDK_RUN_UBSAN=1
00:00:39.423 SPDK_TEST_XNVME=1
00:00:39.423 SPDK_TEST_NVME_FDP=1
00:00:39.423 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:39.431 RUN_NIGHTLY=0
00:00:39.433 [Pipeline] }
00:00:39.448 [Pipeline] // stage
00:00:39.462 [Pipeline] stage
00:00:39.464 [Pipeline] { (Run VM)
00:00:39.477 [Pipeline] sh
00:00:39.763 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:39.763 + echo 'Start stage prepare_nvme.sh'
00:00:39.763 Start stage prepare_nvme.sh
00:00:39.763 + [[ -n 6 ]]
00:00:39.763 + disk_prefix=ex6
00:00:39.763 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:00:39.763 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:00:39.763 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:00:39.763 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:39.763 ++ SPDK_TEST_NVME=1
00:00:39.763 ++ SPDK_TEST_FTL=1
00:00:39.763 ++ SPDK_TEST_ISAL=1
00:00:39.763 ++ SPDK_RUN_ASAN=1
00:00:39.763 ++ SPDK_RUN_UBSAN=1
00:00:39.763 ++ SPDK_TEST_XNVME=1
00:00:39.763 ++ SPDK_TEST_NVME_FDP=1
00:00:39.763 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:39.763 ++ RUN_NIGHTLY=0
00:00:39.763 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:00:39.763 + nvme_files=()
00:00:39.763 + declare -A nvme_files
00:00:39.763 + backend_dir=/var/lib/libvirt/images/backends
00:00:39.763 + nvme_files['nvme.img']=5G
00:00:39.763 + nvme_files['nvme-cmb.img']=5G
00:00:39.763 + nvme_files['nvme-multi0.img']=4G
00:00:39.763 + nvme_files['nvme-multi1.img']=4G
00:00:39.763 + nvme_files['nvme-multi2.img']=4G
00:00:39.763 + nvme_files['nvme-openstack.img']=8G
00:00:39.763 + nvme_files['nvme-zns.img']=5G
00:00:39.763 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:39.763 + (( SPDK_TEST_FTL == 1 ))
00:00:39.763 + nvme_files["nvme-ftl.img"]=6G
00:00:39.763 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:39.763 + nvme_files["nvme-fdp.img"]=1G
00:00:39.763 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:39.763 + for nvme in "${!nvme_files[@]}"
00:00:39.763 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:00:39.763 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:39.763 + for nvme in "${!nvme_files[@]}"
00:00:39.763 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G
00:00:40.024 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:40.024 + for nvme in "${!nvme_files[@]}"
00:00:40.024 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:00:40.024 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:40.024 + for nvme in "${!nvme_files[@]}"
00:00:40.024 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:00:40.286 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:40.286 + for nvme in "${!nvme_files[@]}"
00:00:40.286 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:00:40.286 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:40.286 + for nvme in "${!nvme_files[@]}"
00:00:40.286 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:00:40.286 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:40.286 + for nvme in "${!nvme_files[@]}"
00:00:40.286 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:00:40.286 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:40.286 + for nvme in "${!nvme_files[@]}"
00:00:40.286 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G
00:00:40.286 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:40.286 + for nvme in "${!nvme_files[@]}"
00:00:40.286 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:00:40.548 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:40.548 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:00:40.548 + echo 'End stage prepare_nvme.sh'
00:00:40.548 End stage prepare_nvme.sh
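[Annotation] The trace above implies the shape of prepare_nvme.sh's image loop: an associative array maps image names to sizes, feature flags append optional images, and each entry becomes a raw, falloc-preallocated backing file. A minimal bash sketch reconstructed from the trace (not the verbatim script; names, paths and sizes are taken from the log):

    #!/usr/bin/env bash
    disk_prefix=ex6
    backend_dir=/var/lib/libvirt/images/backends
    declare -A nvme_files=(
        ['nvme.img']=5G ['nvme-cmb.img']=5G ['nvme-multi0.img']=4G
        ['nvme-multi1.img']=4G ['nvme-multi2.img']=4G
        ['nvme-openstack.img']=8G ['nvme-zns.img']=5G
    )
    # Optional images are appended only when the matching test flag is set.
    (( SPDK_TEST_FTL == 1 )) && nvme_files['nvme-ftl.img']=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files['nvme-fdp.img']=1G
    # Bash associative arrays iterate in unspecified order, which is why the
    # Formatting lines above appear shuffled relative to the assignments.
    for nvme in "${!nvme_files[@]}"; do
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/$disk_prefix-$nvme" -s "${nvme_files[$nvme]}"
    done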
00:00:40.561 [Pipeline] sh
00:00:40.846 + DISTRO=fedora39
00:00:40.846 + CPUS=10
00:00:40.846 + RAM=12288
00:00:40.846 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:40.846 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:40.846
00:00:40.846 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:00:40.846 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:00:40.846 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:00:40.846 HELP=0
00:00:40.846 DRY_RUN=0
00:00:40.846 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,
00:00:40.846 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:40.846 NVME_AUTO_CREATE=0
00:00:40.846 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,,
00:00:40.846 NVME_CMB=,,,,
00:00:40.846 NVME_PMR=,,,,
00:00:40.846 NVME_ZNS=,,,,
00:00:40.846 NVME_MS=true,,,,
00:00:40.846 NVME_FDP=,,,on,
00:00:40.846 SPDK_VAGRANT_DISTRO=fedora39
00:00:40.846 SPDK_VAGRANT_VMCPU=10
00:00:40.846 SPDK_VAGRANT_VMRAM=12288
00:00:40.846 SPDK_VAGRANT_PROVIDER=libvirt
00:00:40.846 SPDK_VAGRANT_HTTP_PROXY=
00:00:40.846 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:40.846 SPDK_OPENSTACK_NETWORK=0
00:00:40.846 VAGRANT_PACKAGE_BOX=0
00:00:40.846 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:40.846 FORCE_DISTRO=true
00:00:40.846 VAGRANT_BOX_VERSION=
00:00:40.846 EXTRA_VAGRANTFILES=
00:00:40.846 NIC_MODEL=e1000
00:00:40.846
00:00:40.846 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:00:40.846 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:00:43.440 Bringing machine 'default' up with 'libvirt' provider...
00:00:43.703 ==> default: Creating image (snapshot of base box volume).
00:00:44.275 ==> default: Creating domain with the following settings...
00:00:44.275 ==> default:  -- Name: fedora39-39-1.5-1721788873-2326_default_1733798917_ec1ad54d57957ebab7b5
00:00:44.275 ==> default:  -- Domain type: kvm
00:00:44.275 ==> default:  -- Cpus: 10
00:00:44.275 ==> default:  -- Feature: acpi
00:00:44.275 ==> default:  -- Feature: apic
00:00:44.275 ==> default:  -- Feature: pae
00:00:44.275 ==> default:  -- Memory: 12288M
00:00:44.275 ==> default:  -- Memory Backing: hugepages:
00:00:44.275 ==> default:  -- Management MAC:
00:00:44.275 ==> default:  -- Loader:
00:00:44.275 ==> default:  -- Nvram:
00:00:44.275 ==> default:  -- Base box: spdk/fedora39
00:00:44.275 ==> default:  -- Storage pool: default
00:00:44.275 ==> default:  -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733798917_ec1ad54d57957ebab7b5.img (20G)
00:00:44.275 ==> default:  -- Volume Cache: default
00:00:44.275 ==> default:  -- Kernel:
00:00:44.275 ==> default:  -- Initrd:
00:00:44.275 ==> default:  -- Graphics Type: vnc
00:00:44.275 ==> default:  -- Graphics Port: -1
00:00:44.275 ==> default:  -- Graphics IP: 127.0.0.1
00:00:44.275 ==> default:  -- Graphics Password: Not defined
00:00:44.275 ==> default:  -- Video Type: cirrus
00:00:44.275 ==> default:  -- Video VRAM: 9216
00:00:44.275 ==> default:  -- Sound Type:
00:00:44.275 ==> default:  -- Keymap: en-us
00:00:44.275 ==> default:  -- TPM Path:
00:00:44.275 ==> default:  -- INPUT: type=mouse, bus=ps2
00:00:44.275 ==> default:  -- Command line args:
00:00:44.275 ==> default:  -> value=-device,
00:00:44.275 ==> default:  -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:44.275 ==> default:  -> value=-drive,
00:00:44.275 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:00:44.275 ==> default:  -> value=-device,
00:00:44.275 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:00:44.275 ==> default:  -> value=-device,
00:00:44.275 ==> default:  -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:44.275 ==> default:  -> value=-drive,
00:00:44.275 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0,
00:00:44.275 ==> default:  -> value=-device,
00:00:44.275 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:44.275 ==> default:  -> value=-device,
00:00:44.275 ==> default:  -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:00:44.275 ==> default:  -> value=-drive,
00:00:44.275 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:00:44.275 ==> default:  -> value=-device,
00:00:44.275 ==> default:  -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:44.275 ==> default:  -> value=-drive,
00:00:44.275 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:00:44.275 ==> default:  -> value=-device,
00:00:44.275 ==> default:  -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:44.275 ==> default:  -> value=-drive,
00:00:44.275 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:00:44.275 ==> default:  -> value=-device,
00:00:44.275 ==> default:  -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:44.275 ==> default:  -> value=-device,
00:00:44.275 ==> default:  -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:00:44.275 ==> default:  -> value=-device,
00:00:44.275 ==> default:  -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:00:44.275 ==> default:  -> value=-drive,
00:00:44.275 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:00:44.275 ==> default:  -> value=-device,
00:00:44.275 ==> default:  -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
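[Annotation] Joined into a single command line, the -drive/-device pairs above give each backing file its own NVMe controller and namespace; the fourth controller additionally joins an NVM subsystem with Flexible Data Placement enabled (fdp.runs, fdp.nrg and fdp.nruh size the reclaim units, reclaim groups and placement handles), which is presumably why SPDK_QEMU_EMULATOR points at a vanilla v8.0.0 QEMU, the first release with FDP emulation. A condensed illustration of just that FDP chain, with values copied from the log and all unrelated flags omitted:

    qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096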
00:00:44.275 ==> default: Creating shared folders metadata...
00:00:44.275 ==> default: Starting domain.
00:00:46.188 ==> default: Waiting for domain to get an IP address...
00:01:08.160 ==> default: Waiting for SSH to become available...
00:01:08.160 ==> default: Configuring and enabling network interfaces...
00:01:10.733     default: SSH address: 192.168.121.211:22
00:01:10.733     default: SSH username: vagrant
00:01:10.733     default: SSH auth method: private key
00:01:12.631 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:19.186 ==> default: Mounting SSHFS shared folder...
00:01:20.121 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:20.121 ==> default: Checking Mount..
00:01:21.060 ==> default: Folder Successfully Mounted!
00:01:21.060
00:01:21.060 SUCCESS!
00:01:21.060
00:01:21.060 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:21.060 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:21.060 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:21.060
00:01:21.067 [Pipeline] }
00:01:21.080 [Pipeline] // stage
00:01:21.088 [Pipeline] dir
00:01:21.088 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:01:21.090 [Pipeline] {
00:01:21.101 [Pipeline] catchError
00:01:21.103 [Pipeline] {
00:01:21.114 [Pipeline] sh
00:01:21.395 + vagrant ssh-config --host vagrant
00:01:21.395 + sed -ne '/^Host/,$p'
00:01:21.395 + tee ssh_conf
00:01:23.929 Host vagrant
00:01:23.929 HostName 192.168.121.211
00:01:23.929 User vagrant
00:01:23.929 Port 22
00:01:23.929 UserKnownHostsFile /dev/null
00:01:23.929 StrictHostKeyChecking no
00:01:23.929 PasswordAuthentication no
00:01:23.929 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:23.929 IdentitiesOnly yes
00:01:23.929 LogLevel FATAL
00:01:23.929 ForwardAgent yes
00:01:23.929 ForwardX11 yes
00:01:23.929
00:01:23.941 [Pipeline] withEnv
00:01:23.943 [Pipeline] {
00:01:23.957 [Pipeline] sh
00:01:24.234 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:24.235 source /etc/os-release
00:01:24.235 [[ -e /image.version ]] && img=$(< /image.version)
00:01:24.235 # Minimal, systemd-like check.
00:01:24.235 if [[ -e /.dockerenv ]]; then
00:01:24.235 # Clear garbage from the node'\''s name:
00:01:24.235 # agt-er_autotest_547-896 -> autotest_547-896
00:01:24.235 # $HOSTNAME is the actual container id
00:01:24.235 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:24.235 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:24.235 # We can assume this is a mount from a host where container is running,
00:01:24.235 # so fetch its hostname to easily identify the target swarm worker.
00:01:24.235 container="$(< /etc/hostname) ($agent)"
00:01:24.235 else
00:01:24.235 # Fallback
00:01:24.235 container=$agent
00:01:24.235 fi
00:01:24.235 fi
00:01:24.235 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:24.235 '
00:01:24.502 [Pipeline] }
00:01:24.516 [Pipeline] // withEnv
00:01:24.524 [Pipeline] setCustomBuildProperty
00:01:24.538 [Pipeline] stage
00:01:24.540 [Pipeline] { (Tests)
00:01:24.555 [Pipeline] sh
00:01:24.866 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:24.877 [Pipeline] sh
00:01:25.155 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:25.166 [Pipeline] timeout
00:01:25.167 Timeout set to expire in 50 min
00:01:25.168 [Pipeline] {
00:01:25.181 [Pipeline] sh
00:01:25.458 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:25.716 HEAD is now at 86d35c37a bdev: simplify bdev_reset_freeze_channel
00:01:25.726 [Pipeline] sh
00:01:26.000 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:26.013 [Pipeline] sh
00:01:26.290 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:26.304 [Pipeline] sh
00:01:26.582 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:01:26.582 ++ readlink -f spdk_repo
00:01:26.582 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:26.582 + [[ -n /home/vagrant/spdk_repo ]]
00:01:26.582 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:26.582 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:26.582 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:26.582 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:26.582 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:26.582 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:26.582 + cd /home/vagrant/spdk_repo
00:01:26.582 + source /etc/os-release
00:01:26.582 ++ NAME='Fedora Linux'
00:01:26.582 ++ VERSION='39 (Cloud Edition)'
00:01:26.582 ++ ID=fedora
00:01:26.582 ++ VERSION_ID=39
00:01:26.582 ++ VERSION_CODENAME=
00:01:26.582 ++ PLATFORM_ID=platform:f39
00:01:26.582 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:26.582 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:26.582 ++ LOGO=fedora-logo-icon
00:01:26.582 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:26.582 ++ HOME_URL=https://fedoraproject.org/
00:01:26.582 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:26.582 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:26.582 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:26.582 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:26.582 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:26.582 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:26.582 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:26.582 ++ SUPPORT_END=2024-11-12
00:01:26.582 ++ VARIANT='Cloud Edition'
00:01:26.582 ++ VARIANT_ID=cloud
00:01:26.582 + uname -a
00:01:26.582 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:26.582 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:27.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:27.147 Hugepages
00:01:27.147 node hugesize free / total
00:01:27.147 node0 1048576kB 0 / 0
00:01:27.147 node0 2048kB 0 / 0
00:01:27.147
00:01:27.147 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:27.147 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:27.405 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:27.405 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:27.405 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:27.405 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:01:27.405 + rm -f /tmp/spdk-ld-path
00:01:27.405 + source autorun-spdk.conf
00:01:27.405 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.405 ++ SPDK_TEST_NVME=1
00:01:27.405 ++ SPDK_TEST_FTL=1
00:01:27.405 ++ SPDK_TEST_ISAL=1
00:01:27.405 ++ SPDK_RUN_ASAN=1
00:01:27.405 ++ SPDK_RUN_UBSAN=1
00:01:27.405 ++ SPDK_TEST_XNVME=1
00:01:27.405 ++ SPDK_TEST_NVME_FDP=1
00:01:27.405 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:27.405 ++ RUN_NIGHTLY=0
00:01:27.405 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:27.405 + [[ -n '' ]]
00:01:27.405 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:27.405 + for M in /var/spdk/build-*-manifest.txt
00:01:27.405 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:27.405 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:27.405 + for M in /var/spdk/build-*-manifest.txt
00:01:27.405 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:27.405 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:27.405 + for M in /var/spdk/build-*-manifest.txt
00:01:27.405 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:27.405 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:27.405 ++ uname
00:01:27.405 + [[ Linux == \L\i\n\u\x ]]
00:01:27.405 + sudo dmesg -T
00:01:27.405 + sudo dmesg --clear
00:01:27.405 + dmesg_pid=5015
00:01:27.405 + sudo dmesg -Tw
00:01:27.405 + [[ Fedora Linux == FreeBSD ]]
00:01:27.405 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.405 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.405 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:27.405 + [[ -x /usr/src/fio-static/fio ]]
00:01:27.405 + export FIO_BIN=/usr/src/fio-static/fio
00:01:27.405 + FIO_BIN=/usr/src/fio-static/fio
00:01:27.405 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:27.405 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:27.405 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:27.405 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:27.405 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:27.405 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:27.405 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:27.405 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:27.405 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
02:49:21 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
02:49:21 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
02:49:21 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
02:49:21 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
02:49:21 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
02:49:21 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
02:49:21 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
02:49:21 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
02:49:21 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
02:49:21 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
02:49:21 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
02:49:21 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
02:49:21 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
02:49:21 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
02:49:21 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
02:49:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
02:49:21 -- scripts/common.sh@15 -- $ shopt -s extglob
02:49:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
02:49:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
02:49:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
02:49:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.405 02:49:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.405 02:49:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.405 02:49:21 -- paths/export.sh@5 -- $ export PATH
00:01:27.405 02:49:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.405 02:49:21 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:27.405 02:49:21 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:27.405 02:49:21 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733798961.XXXXXX
00:01:27.405 02:49:21 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733798961.8rc4qU
00:01:27.405 02:49:21 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:27.405 02:49:21 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:27.405 02:49:21 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:27.405 02:49:21 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:27.405 02:49:21 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:27.405 02:49:21 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:27.405 02:49:21 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:27.405 02:49:21 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.663 02:49:21 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:27.663 02:49:21 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:27.663 02:49:21 -- pm/common@17 -- $ local monitor
00:01:27.663 02:49:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.663 02:49:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.663 02:49:21 -- pm/common@25 -- $ sleep 1
00:01:27.663 02:49:21 -- pm/common@21 -- $ date +%s
00:01:27.663 02:49:21 -- pm/common@21 -- $ date +%s
00:01:27.663 02:49:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733798961
00:01:27.663 02:49:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733798961
00:01:27.663 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733798961_collect-cpu-load.pm.log
00:01:27.663 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733798961_collect-vmstat.pm.log
00:01:28.597 02:49:22 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
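[Annotation] The two Redirecting lines and the trap stop_monitor_resources EXIT entry above follow a standard bash pattern: start samplers in the background, remember their PIDs, and let an EXIT trap reap them on every exit path, including failures. A generic sketch of that pattern (start_monitors/stop_monitors are illustrative names, not the exact SPDK pm/common helpers):

    #!/usr/bin/env bash
    out=/home/vagrant/spdk_repo/output
    pids=()
    start_monitors() {
        # Launch each sampler in the background and record its PID.
        scripts/perf/pm/collect-cpu-load -d "$out/power" -l -p "monitor.$$" & pids+=($!)
        scripts/perf/pm/collect-vmstat -d "$out/power" -l -p "monitor.$$" & pids+=($!)
    }
    stop_monitors() {
        # Wired to EXIT, so it runs on normal completion and on failure alike.
        kill "${pids[@]}" 2>/dev/null || true
    }
    trap stop_monitors EXIT
    start_monitors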
00:01:28.597 02:49:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:28.597 02:49:22 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:28.597 02:49:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:28.597 02:49:22 -- spdk/autobuild.sh@16 -- $ date -u
00:01:28.597 Tue Dec 10 02:49:22 AM UTC 2024
00:01:28.597 02:49:22 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:28.597 v25.01-pre-312-g86d35c37a
00:01:28.597 02:49:22 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:28.597 02:49:22 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:28.597 02:49:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:28.597 02:49:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:28.597 02:49:22 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.597 ************************************
00:01:28.597 START TEST asan
00:01:28.597 ************************************
00:01:28.597 using asan
00:01:28.597 02:49:22 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:28.597
00:01:28.597 real	0m0.000s
00:01:28.597 user	0m0.000s
00:01:28.597 sys	0m0.000s
00:01:28.597 02:49:22 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:28.597 ************************************
00:01:28.597 END TEST asan
00:01:28.597 ************************************
00:01:28.597 02:49:22 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:28.597 02:49:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:28.597 02:49:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:28.597 02:49:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:28.597 02:49:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:28.597 02:49:22 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.597 ************************************
00:01:28.597 START TEST ubsan
00:01:28.597 ************************************
00:01:28.597 using ubsan
00:01:28.597 02:49:22 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:28.597
00:01:28.597 real	0m0.000s
00:01:28.597 user	0m0.000s
00:01:28.597 sys	0m0.000s
00:01:28.597 02:49:22 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:28.597 ************************************
00:01:28.597 02:49:22 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:28.597 END TEST ubsan
00:01:28.597 ************************************
00:01:28.597 02:49:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:28.597 02:49:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:28.597 02:49:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:28.597 02:49:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:28.597 02:49:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:28.597 02:49:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:28.597 02:49:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:28.597 02:49:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:28.597 02:49:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:28.597 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:28.597 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:29.166 Using 'verbs' RDMA provider
00:01:41.939 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:51.914 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:51.914 Creating mk/config.mk...done.
00:01:51.914 Creating mk/cc.flags.mk...done.
00:01:51.914 Type 'make' to build.
00:01:51.914 02:49:45 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:51.914 02:49:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:51.914 02:49:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:51.914 02:49:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:51.914 ************************************
00:01:51.914 START TEST make
00:01:51.914 ************************************
00:01:51.914 02:49:45 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:51.914 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:01:51.914 	export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:01:51.914 	meson setup builddir \
00:01:51.914 	-Dwith-libaio=enabled \
00:01:51.914 	-Dwith-liburing=enabled \
00:01:51.914 	-Dwith-libvfn=disabled \
00:01:51.914 	-Dwith-spdk=disabled \
00:01:51.914 	-Dexamples=false \
00:01:51.914 	-Dtests=false \
00:01:51.914 	-Dtools=false && \
00:01:51.914 	meson compile -C builddir && \
00:01:51.914 	cd -)
00:01:51.914 make[1]: Nothing to be done for 'all'.
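[Annotation] The xnvme subproject above is configured out of tree: meson setup writes builddir once with the -D options shown, and meson compile -C builddir builds from it. If an option needs to change later, meson can rewrite the cached value in place instead of reconfiguring from scratch; for example (standard meson usage, not commands from this log):

    meson configure builddir                   # list the current option values
    meson configure builddir -Dexamples=true   # flip one option in place
    meson compile -C builddir                  # rebuild with the new value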
00:01:53.295 The Meson build system
00:01:53.295 Version: 1.5.0
00:01:53.295 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:01:53.295 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:01:53.295 Build type: native build
00:01:53.295 Project name: xnvme
00:01:53.295 Project version: 0.7.5
00:01:53.295 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:53.295 C linker for the host machine: cc ld.bfd 2.40-14
00:01:53.295 Host machine cpu family: x86_64
00:01:53.295 Host machine cpu: x86_64
00:01:53.295 Message: host_machine.system: linux
00:01:53.295 Compiler for C supports arguments -Wno-missing-braces: YES
00:01:53.295 Compiler for C supports arguments -Wno-cast-function-type: YES
00:01:53.295 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:01:53.295 Run-time dependency threads found: YES
00:01:53.295 Has header "setupapi.h" : NO
00:01:53.295 Has header "linux/blkzoned.h" : YES
00:01:53.295 Has header "linux/blkzoned.h" : YES (cached)
00:01:53.295 Has header "libaio.h" : YES
00:01:53.295 Library aio found: YES
00:01:53.295 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:53.295 Run-time dependency liburing found: YES 2.2
00:01:53.295 Dependency libvfn skipped: feature with-libvfn disabled
00:01:53.295 Found CMake: /usr/bin/cmake (3.27.7)
00:01:53.295 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:01:53.295 Subproject spdk : skipped: feature with-spdk disabled
00:01:53.295 Run-time dependency appleframeworks found: NO (tried framework)
00:01:53.295 Run-time dependency appleframeworks found: NO (tried framework)
00:01:53.295 Library rt found: YES
00:01:53.295 Checking for function "clock_gettime" with dependency -lrt: YES
00:01:53.295 Configuring xnvme_config.h using configuration
00:01:53.295 Configuring xnvme.spec using configuration
00:01:53.295 Run-time dependency bash-completion found: YES 2.11
00:01:53.295 Message: Bash-completions: /usr/share/bash-completion/completions
00:01:53.295 Program cp found: YES (/usr/bin/cp)
00:01:53.295 Build targets in project: 3
00:01:53.295
00:01:53.295 xnvme 0.7.5
00:01:53.295
00:01:53.295 Subprojects
00:01:53.295 spdk : NO Feature 'with-spdk' disabled
00:01:53.295
00:01:53.295 User defined options
00:01:53.295 examples : false
00:01:53.295 tests : false
00:01:53.295 tools : false
00:01:53.295 with-libaio : enabled
00:01:53.295 with-liburing: enabled
00:01:53.295 with-libvfn : disabled
00:01:53.295 with-spdk : disabled
00:01:53.295
00:01:53.295 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:53.861 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:01:53.861 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:01:53.861 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:01:53.861 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:01:53.861 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:01:53.861 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:01:53.861 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:01:53.861 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:01:53.861 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:01:53.861 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:01:53.861 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:01:53.861 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:01:53.861 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:01:53.861 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:01:53.861 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:01:53.861 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:01:53.861 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:01:53.861 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:01:54.120 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:01:54.120 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:01:54.120 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:01:54.120 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:01:54.120 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:01:54.120 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:01:54.120 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:01:54.120 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:01:54.120 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:01:54.120 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:01:54.120 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:01:54.120 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:01:54.120 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:01:54.120 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:01:54.120 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:01:54.120 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:01:54.120 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:01:54.120 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:01:54.120 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:01:54.120 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:01:54.120 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:01:54.120 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:01:54.120 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:01:54.120 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:01:54.120 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:01:54.120 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:01:54.120 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:01:54.120 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:01:54.120 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:01:54.120 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:01:54.120 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:01:54.120 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:01:54.120 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:01:54.120 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:01:54.120 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:01:54.378 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:01:54.378 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:01:54.378 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:01:54.378 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:01:54.378 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:01:54.378 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:01:54.378 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:01:54.378 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:01:54.378 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:01:54.378 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:01:54.378 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:01:54.378 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:01:54.378 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:01:54.378 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:01:54.378 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:01:54.378 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:01:54.378 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:01:54.378 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:01:54.635 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:01:54.635 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:01:54.636 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:01:54.893 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:01:54.893 [75/76] Linking static target lib/libxnvme.a
00:01:54.893 [76/76] Linking target lib/libxnvme.so.0.7.5
00:01:54.893 INFO: autodetecting backend as ninja
00:01:54.893 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:01:54.893 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:02.998 The Meson build system
00:02:02.998 Version: 1.5.0
00:02:02.998 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:02.998 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:02.998 Build type: native build
00:02:02.998 Program cat found: YES (/usr/bin/cat)
00:02:02.998 Project name: DPDK
00:02:02.998 Project version: 24.03.0
00:02:02.998 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:02.998 C linker for the host machine: cc ld.bfd 2.40-14
00:02:02.998 Host machine cpu family: x86_64
00:02:02.998 Host machine cpu: x86_64
00:02:02.998 Message: ## Building in Developer Mode ##
00:02:02.998 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:02.998 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:02.998 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:02.998 Program python3 found: YES (/usr/bin/python3)
00:02:02.998 Program cat found: YES (/usr/bin/cat)
00:02:02.998 Compiler for C supports arguments -march=native: YES
00:02:02.998 Checking for size of "void *" : 8
00:02:02.998 Checking for size of "void *" : 8 (cached)
00:02:02.998 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:02.998 Library m found: YES
00:02:02.998 Library numa found: YES
00:02:02.998 Has header "numaif.h" : YES
00:02:02.998 Library fdt found: NO
00:02:02.998 Library execinfo found: NO
00:02:02.998 Has header "execinfo.h" : YES
00:02:02.998 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:02.998 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:02.998 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:02.998 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:02.998 Run-time dependency openssl found: YES 3.1.1
00:02:02.998 Run-time dependency libpcap found: YES 1.10.4
00:02:02.998 Has header "pcap.h" with dependency libpcap: YES
00:02:02.998 Compiler for C supports arguments -Wcast-qual: YES
00:02:02.998 Compiler for C supports arguments -Wdeprecated: YES
00:02:02.998 Compiler for C supports arguments -Wformat: YES
00:02:02.998 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:02.998 Compiler for C supports arguments -Wformat-security: NO
00:02:02.998 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:02.998 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:02.998 Compiler for C supports arguments -Wnested-externs: YES
00:02:02.998 Compiler for C supports arguments -Wold-style-definition: YES
00:02:02.998 Compiler for C supports arguments -Wpointer-arith: YES
00:02:02.998 Compiler for C supports arguments -Wsign-compare: YES
00:02:02.998 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:02.998 Compiler for C supports arguments -Wundef: YES
00:02:02.998 Compiler for C supports arguments -Wwrite-strings: YES
00:02:02.998 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:02.998 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:02.998 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:02.998 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:02.998 Program objdump found: YES (/usr/bin/objdump)
00:02:02.998 Compiler for C supports arguments -mavx512f: YES
00:02:02.998 Checking if "AVX512 checking" compiles: YES
00:02:02.998 Fetching value of define "__SSE4_2__" : 1
00:02:02.998 Fetching value of define "__AES__" : 1
00:02:02.998 Fetching value of define "__AVX__" : 1
00:02:02.998 Fetching value of define "__AVX2__" : 1
00:02:02.998 Fetching value of define "__AVX512BW__" : 1
00:02:02.998 Fetching value of define "__AVX512CD__" : 1
00:02:02.998 Fetching value of define "__AVX512DQ__" : 1
00:02:02.998 Fetching value of define "__AVX512F__" : 1
00:02:02.998 Fetching value of define "__AVX512VL__" : 1
00:02:02.998 Fetching value of define "__PCLMUL__" : 1
00:02:02.998 Fetching value of define "__RDRND__" : 1
00:02:02.998 Fetching value of define "__RDSEED__" : 1
00:02:02.998 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:02.998 Fetching value of define "__znver1__" : (undefined)
00:02:02.998 Fetching value of define "__znver2__" : (undefined)
00:02:02.998 Fetching value of define "__znver3__" : (undefined)
00:02:02.998 Fetching value of define "__znver4__" : (undefined)
00:02:02.998 Library asan found: YES
00:02:02.998 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:02.998 Message: lib/log: Defining dependency "log"
00:02:02.998 Message: lib/kvargs: Defining dependency "kvargs"
00:02:02.998 Message: lib/telemetry: Defining dependency "telemetry"
00:02:02.998 Library rt found: YES
00:02:02.998 Checking for function "getentropy" : NO
00:02:02.998 Message: lib/eal: Defining dependency "eal"
00:02:02.998 Message: lib/ring: Defining dependency "ring"
00:02:02.998 Message: lib/rcu: Defining dependency "rcu"
00:02:02.998 Message: lib/mempool: Defining dependency "mempool"
00:02:02.998 Message: lib/mbuf: Defining dependency "mbuf"
00:02:02.998 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:02.998 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:02.998 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:02.998 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:02.998 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:02.998 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:02.998 Compiler for C supports arguments -mpclmul: YES
00:02:02.998 Compiler for C supports arguments -maes: YES
00:02:02.998 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:02.998 Compiler for C supports arguments -mavx512bw: YES
00:02:02.998 Compiler for C supports arguments -mavx512dq: YES
00:02:02.998 Compiler for C supports arguments -mavx512vl: YES
00:02:02.998 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:02.998 Compiler for C supports arguments -mavx2: YES
00:02:02.998 Compiler for C supports arguments -mavx: YES
00:02:02.998 Message: lib/net: Defining dependency "net"
00:02:02.998 Message: lib/meter: Defining dependency "meter"
00:02:02.998 Message: lib/ethdev: Defining dependency "ethdev"
00:02:02.998 Message: lib/pci: Defining dependency "pci"
00:02:02.998 Message: lib/cmdline: Defining dependency "cmdline"
00:02:02.998 Message: lib/hash: Defining dependency "hash"
00:02:02.998 Message: lib/timer: Defining dependency "timer"
00:02:02.998 Message: lib/compressdev: Defining dependency "compressdev"
00:02:02.998 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:02.998 Message: lib/dmadev: Defining dependency "dmadev"
00:02:02.998 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:02.998 Message: lib/power: Defining dependency "power"
00:02:02.998 Message: lib/reorder: Defining dependency "reorder"
00:02:02.998 Message: lib/security: Defining dependency "security"
00:02:02.998 Has header "linux/userfaultfd.h" : YES
00:02:02.998 Has header "linux/vduse.h" : YES
00:02:02.998 Message: lib/vhost: Defining dependency "vhost"
00:02:02.998 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:02.998 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:02.999 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:02.999 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:02.999 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:02.999 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:02.999 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:02.999 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:02.999 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:02.999 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:02.999 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:02.999 Configuring doxy-api-html.conf using configuration
00:02:02.999 Configuring doxy-api-man.conf using configuration
00:02:02.999 Program mandb found: YES (/usr/bin/mandb)
00:02:02.999 Program sphinx-build found: NO
00:02:02.999 Configuring rte_build_config.h using configuration
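[Annotation] Every "Defining dependency" line above corresponds to a DPDK component that survived SPDK's trimmed configuration; everything else is about to be listed as skipped in the summary below. The effect is comparable to restricting the build by hand with DPDK's meson options, roughly (illustrative invocation, not SPDK's exact dpdkbuild command):

    meson setup build-tmp -Denable_drivers=bus/pci,bus/vdev,mempool/ring
    ninja -C build-tmp

with apps and optional libraries trimmed through the analogous enable_apps/disable_libs options; that trimming is what produces the long "explicitly disabled via build config" and "not in enabled drivers build config" lists that follow.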
00:02:02.999 Message:
00:02:02.999 =================
00:02:02.999 Applications Enabled
00:02:02.999 =================
00:02:02.999
00:02:02.999 apps:
00:02:02.999
00:02:02.999
00:02:02.999 Message:
00:02:02.999 =================
00:02:02.999 Libraries Enabled
00:02:02.999 =================
00:02:02.999
00:02:02.999 libs:
00:02:02.999 	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:02.999 	net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:02.999 	cryptodev, dmadev, power, reorder, security, vhost,
00:02:02.999
00:02:02.999 Message:
00:02:02.999 ===============
00:02:02.999 Drivers Enabled
00:02:02.999 ===============
00:02:02.999
00:02:02.999 common:
00:02:02.999
00:02:02.999 bus:
00:02:02.999 	pci, vdev,
00:02:02.999 mempool:
00:02:02.999 	ring,
00:02:02.999 dma:
00:02:02.999
00:02:02.999 net:
00:02:02.999
00:02:02.999 crypto:
00:02:02.999
00:02:02.999 compress:
00:02:02.999
00:02:02.999 vdpa:
00:02:02.999
00:02:02.999
00:02:02.999 Message:
00:02:02.999 =================
00:02:02.999 Content Skipped
00:02:02.999 =================
00:02:02.999
00:02:02.999 apps:
00:02:02.999 	dumpcap: explicitly disabled via build config
00:02:02.999 	graph: explicitly disabled via build config
00:02:02.999 	pdump: explicitly disabled via build config
00:02:02.999 	proc-info: explicitly disabled via build config
00:02:02.999 	test-acl: explicitly disabled via build config
00:02:02.999 	test-bbdev: explicitly disabled via build config
00:02:02.999 	test-cmdline: explicitly disabled via build config
00:02:02.999 	test-compress-perf: explicitly disabled via build config
00:02:02.999 	test-crypto-perf: explicitly disabled via build config
00:02:02.999 	test-dma-perf: explicitly disabled via build config
00:02:02.999 	test-eventdev: explicitly disabled via build config
00:02:02.999 	test-fib: explicitly disabled via build config
00:02:02.999 	test-flow-perf: explicitly disabled via build config
00:02:02.999 	test-gpudev: explicitly disabled via build config
00:02:02.999 	test-mldev: explicitly disabled via build config
00:02:02.999 	test-pipeline: explicitly disabled via build config
00:02:02.999 	test-pmd: explicitly disabled via build config
00:02:02.999 	test-regex: explicitly disabled via build config
00:02:02.999 	test-sad: explicitly disabled via build config
00:02:02.999 	test-security-perf: explicitly disabled via build config
00:02:02.999
00:02:02.999 libs:
00:02:02.999 	argparse: explicitly disabled via build config
00:02:02.999 	metrics: explicitly disabled via build config
00:02:02.999 	acl: explicitly disabled via build config
00:02:02.999 	bbdev: explicitly disabled via build config
00:02:02.999 	bitratestats: explicitly disabled via build config
00:02:02.999 	bpf: explicitly disabled via build config
00:02:02.999 	cfgfile: explicitly disabled via build config
00:02:02.999 	distributor: explicitly disabled via build config
00:02:02.999 	efd: explicitly disabled via build config
00:02:02.999 	eventdev: explicitly disabled via build config
00:02:02.999 	dispatcher: explicitly disabled via build config
00:02:02.999 	gpudev: explicitly disabled via build config
00:02:02.999 	gro: explicitly disabled via build config
00:02:02.999 	gso: explicitly disabled via build config
00:02:02.999 	ip_frag: explicitly disabled via build config
00:02:02.999 	jobstats: explicitly disabled via build config
00:02:02.999 	latencystats: explicitly disabled via build config
00:02:02.999 	lpm: explicitly disabled via build config
00:02:02.999 	member: explicitly disabled via build config
00:02:02.999 	pcapng: explicitly disabled via build config
00:02:02.999 	rawdev: explicitly disabled via build config
00:02:02.999 	regexdev: explicitly disabled via build config
00:02:02.999 	mldev: explicitly disabled via build config
00:02:02.999 	rib: explicitly disabled via build config
00:02:02.999 	sched: explicitly disabled via build config
00:02:02.999 	stack: explicitly disabled via build config
00:02:02.999 	ipsec: explicitly disabled via build config
00:02:02.999 	pdcp: explicitly disabled via build config
00:02:02.999 	fib: explicitly disabled via build config
00:02:02.999 	port: explicitly disabled via build config
00:02:02.999 	pdump: explicitly disabled via build config
00:02:02.999 	table: explicitly disabled via build config
00:02:02.999 	pipeline: explicitly disabled via build config
00:02:02.999 	graph: explicitly disabled via build config
00:02:02.999 	node: explicitly disabled via build config
00:02:02.999
00:02:02.999 drivers:
00:02:02.999 	common/cpt: not in enabled drivers build config
00:02:02.999 	common/dpaax: not in enabled drivers build config
00:02:02.999 	common/iavf: not in enabled drivers build config
00:02:02.999 	common/idpf: not in enabled drivers build config
00:02:02.999 	common/ionic: not in enabled drivers build config
00:02:02.999 	common/mvep: not in enabled drivers build config
00:02:02.999 	common/octeontx: not in enabled drivers build config
00:02:02.999 	bus/auxiliary: not in enabled drivers build config
00:02:02.999 	bus/cdx: not in enabled drivers build config
00:02:02.999 	bus/dpaa: not in enabled drivers build config
00:02:02.999 	bus/fslmc: not in enabled drivers build config
00:02:02.999 	bus/ifpga: not in enabled drivers build config
00:02:02.999 	bus/platform: not in enabled drivers build config
00:02:02.999 	bus/uacce: not in enabled drivers build config
00:02:02.999 	bus/vmbus: not in enabled drivers build config
00:02:02.999 	common/cnxk: not in enabled drivers build config
00:02:02.999 	common/mlx5: not in enabled drivers build config
00:02:02.999 	common/nfp: not in enabled drivers build config
00:02:02.999 	common/nitrox: not in enabled drivers build config
00:02:02.999 	common/qat: not in enabled drivers build config
00:02:02.999 	common/sfc_efx: not in enabled drivers build config
00:02:02.999 	mempool/bucket: not in enabled drivers build config
00:02:02.999 	mempool/cnxk: not in enabled drivers build config
00:02:02.999 	mempool/dpaa: not in enabled drivers build config
00:02:02.999 	mempool/dpaa2: not in enabled drivers build config
00:02:02.999 	mempool/octeontx: not in enabled drivers build config
00:02:02.999 	mempool/stack: not in enabled drivers build config
00:02:02.999 	dma/cnxk: not in enabled drivers build config
00:02:02.999 	dma/dpaa: not in enabled drivers build config
00:02:02.999 	dma/dpaa2: not in enabled drivers build config
00:02:02.999 	dma/hisilicon: not in enabled drivers build config
00:02:02.999 	dma/idxd: not in enabled drivers build config
00:02:02.999 	dma/ioat: not in enabled drivers build config
00:02:02.999 	dma/skeleton: not in enabled drivers build config
00:02:02.999 	net/af_packet: not in enabled drivers build config
00:02:02.999 	net/af_xdp: not in enabled drivers build config
00:02:02.999 	net/ark: not in enabled drivers build config
00:02:02.999 	net/atlantic: not in enabled drivers build config
00:02:02.999 	net/avp: not in enabled drivers build config
00:02:02.999 	net/axgbe: not in enabled drivers build config
00:02:02.999 	net/bnx2x: not in enabled drivers build config
00:02:02.999 	net/bnxt: not in enabled drivers build config
00:02:02.999 	net/bonding: not in enabled drivers build config
00:02:02.999 	net/cnxk: not in enabled drivers build config
00:02:02.999 	net/cpfl: not in enabled drivers build config
00:02:02.999 	net/cxgbe: not in enabled drivers build config
00:02:02.999 	net/dpaa: not in enabled drivers build config
00:02:02.999 	net/dpaa2: not in enabled drivers build config
00:02:02.999 	net/e1000: not in enabled drivers build config
00:02:02.999 	net/ena: not in enabled drivers build config
00:02:02.999 	net/enetc: not in enabled drivers build config
00:02:02.999 	net/enetfec: not in enabled drivers build config
00:02:02.999 	net/enic: not in enabled drivers build config
00:02:02.999 	net/failsafe: not in enabled drivers build config
00:02:02.999 	net/fm10k: not in enabled drivers build config
00:02:02.999 	net/gve: not in enabled drivers build config
00:02:02.999 	net/hinic: not in enabled drivers build config
00:02:02.999 	net/hns3: not in enabled drivers build config
00:02:02.999 	net/i40e: not in enabled drivers build config
00:02:02.999 	net/iavf: not in enabled drivers build config
00:02:02.999 	net/ice: not in enabled drivers build config
00:02:02.999 	net/idpf: not in enabled drivers build config
00:02:02.999 	net/igc: not in enabled drivers build config
00:02:02.999 	net/ionic: not in enabled drivers build config
00:02:02.999 	net/ipn3ke: not in enabled drivers build config
00:02:02.999 	net/ixgbe: not in enabled drivers build config
00:02:02.999 	net/mana: not in enabled drivers build config
00:02:02.999 	net/memif: not in enabled drivers build config
00:02:02.999 	net/mlx4: not in enabled drivers build config
00:02:02.999 	net/mlx5: not in enabled drivers build config
00:02:02.999 	net/mvneta: not in enabled drivers build config
00:02:02.999 	net/mvpp2: not in enabled drivers build config
00:02:02.999 	net/netvsc: not in enabled drivers build config
00:02:02.999 	net/nfb: not in enabled drivers build config
00:02:02.999 	net/nfp: not in enabled drivers build config
00:02:02.999 	net/ngbe: not in enabled drivers build config
00:02:02.999 	net/null: not in enabled drivers build config
00:02:02.999 	net/octeontx: not in enabled drivers build config
00:02:02.999 	net/octeon_ep: not in enabled drivers build config
00:02:02.999 	net/pcap: not in enabled drivers build config
00:02:03.000 	net/pfe: not in enabled drivers build config
00:02:03.000 	net/qede: not in enabled drivers build config
00:02:03.000 	net/ring: not in enabled drivers build config
00:02:03.000 	net/sfc: not in enabled drivers build config
00:02:03.000 	net/softnic: not in enabled drivers build config
00:02:03.000 	net/tap: not in enabled drivers build config
00:02:03.000 	net/thunderx: not in enabled drivers build config
00:02:03.000 	net/txgbe: not in enabled drivers build config
00:02:03.000 	net/vdev_netvsc: not in enabled drivers build config
00:02:03.000 	net/vhost: not in enabled drivers build config
00:02:03.000 	net/virtio: not in enabled drivers build config
00:02:03.000 	net/vmxnet3: not in enabled drivers build config
00:02:03.000 	raw/*: missing internal dependency, "rawdev"
00:02:03.000 	crypto/armv8: not in enabled drivers build config
00:02:03.000 	crypto/bcmfs: not in enabled drivers build config
00:02:03.000 	crypto/caam_jr: not in enabled drivers build config
00:02:03.000 	crypto/ccp: not in enabled drivers build config
00:02:03.000 	crypto/cnxk: not in enabled drivers build config
00:02:03.000 	crypto/dpaa_sec: not in enabled drivers build config
00:02:03.000 	crypto/dpaa2_sec: not in enabled drivers build config
00:02:03.000 	crypto/ipsec_mb: not in enabled drivers build config
00:02:03.000 	crypto/mlx5: not in enabled drivers build config
00:02:03.000 	crypto/mvsam: not in enabled drivers build config
00:02:03.000 crypto/nitrox: not in enabled drivers build config 00:02:03.000 crypto/null: not in enabled drivers build config 00:02:03.000 crypto/octeontx: not in enabled drivers build config 00:02:03.000 crypto/openssl: not in enabled drivers build config 00:02:03.000 crypto/scheduler: not in enabled drivers build config 00:02:03.000 crypto/uadk: not in enabled drivers build config 00:02:03.000 crypto/virtio: not in enabled drivers build config 00:02:03.000 compress/isal: not in enabled drivers build config 00:02:03.000 compress/mlx5: not in enabled drivers build config 00:02:03.000 compress/nitrox: not in enabled drivers build config 00:02:03.000 compress/octeontx: not in enabled drivers build config 00:02:03.000 compress/zlib: not in enabled drivers build config 00:02:03.000 regex/*: missing internal dependency, "regexdev" 00:02:03.000 ml/*: missing internal dependency, "mldev" 00:02:03.000 vdpa/ifc: not in enabled drivers build config 00:02:03.000 vdpa/mlx5: not in enabled drivers build config 00:02:03.000 vdpa/nfp: not in enabled drivers build config 00:02:03.000 vdpa/sfc: not in enabled drivers build config 00:02:03.000 event/*: missing internal dependency, "eventdev" 00:02:03.000 baseband/*: missing internal dependency, "bbdev" 00:02:03.000 gpu/*: missing internal dependency, "gpudev" 00:02:03.000 00:02:03.000 00:02:03.000 Build targets in project: 84 00:02:03.000 00:02:03.000 DPDK 24.03.0 00:02:03.000 00:02:03.000 User defined options 00:02:03.000 buildtype : debug 00:02:03.000 default_library : shared 00:02:03.000 libdir : lib 00:02:03.000 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:03.000 b_sanitize : address 00:02:03.000 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:03.000 c_link_args : 00:02:03.000 cpu_instruction_set: native 00:02:03.000 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:03.000 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:03.000 enable_docs : false 00:02:03.000 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:03.000 enable_kmods : false 00:02:03.000 max_lcores : 128 00:02:03.000 tests : false 00:02:03.000 00:02:03.000 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:03.000 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:03.000 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:03.000 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:03.000 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:03.000 [4/267] Linking static target lib/librte_kvargs.a 00:02:03.000 [5/267] Linking static target lib/librte_log.a 00:02:03.000 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:03.000 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:03.000 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:03.000 [9/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:03.000 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:03.000 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:03.000 [12/267] Linking static target lib/librte_telemetry.a 00:02:03.000 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:03.000 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:03.000 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:03.258 [16/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.258 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:03.258 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:03.516 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:03.516 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:03.516 [21/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.516 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:03.516 [23/267] Linking target lib/librte_log.so.24.1 00:02:03.516 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:03.774 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:03.774 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:03.774 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:03.774 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:03.774 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:03.774 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:03.774 [31/267] Linking target lib/librte_kvargs.so.24.1 00:02:04.032 [32/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.032 [33/267] Linking target lib/librte_telemetry.so.24.1 00:02:04.032 [34/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:04.032 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:04.032 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:04.290 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:04.290 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:04.290 [39/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:04.290 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:04.290 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:04.290 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:04.290 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:04.290 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:04.290 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:04.548 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:04.549 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:04.549 
[48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:04.806 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:04.806 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:04.806 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.806 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:04.806 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:04.806 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.065 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.065 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:05.065 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.065 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.323 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.323 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.323 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.323 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:05.323 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.323 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.323 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.323 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.582 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.582 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.840 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.840 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:05.840 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:05.840 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:05.840 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:05.840 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:05.840 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:05.840 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.840 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.098 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:06.098 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.098 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:06.098 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.356 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.356 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.356 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.356 [85/267] Linking static target lib/librte_ring.a 00:02:06.356 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:06.614 [87/267] Linking static target lib/librte_eal.a 00:02:06.614 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.614 [89/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.614 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.614 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.871 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:06.871 [93/267] Linking static target lib/librte_mempool.a 00:02:06.871 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.871 [95/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.871 [96/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:06.871 [97/267] Linking static target lib/librte_rcu.a 00:02:06.871 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:06.871 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.128 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.128 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.128 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.128 [103/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:07.128 [104/267] Linking static target lib/librte_mbuf.a 00:02:07.128 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.128 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:07.128 [107/267] Linking static target lib/librte_meter.a 00:02:07.386 [108/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.386 [109/267] Linking static target lib/librte_net.a 00:02:07.386 [110/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.386 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:07.644 [112/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.644 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:07.644 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:07.644 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:07.644 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.644 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.902 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:07.902 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.160 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.160 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:08.160 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:08.418 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:08.418 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:08.418 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:08.418 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:08.418 [127/267] Linking static target lib/librte_pci.a 00:02:08.418 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:08.418 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:08.418 [130/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:08.418 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:08.418 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:08.418 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:08.676 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:08.676 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:08.676 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:08.676 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:08.676 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:08.676 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:08.676 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:08.676 [141/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:08.676 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:08.676 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:08.676 [144/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.676 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:08.676 [146/267] Linking static target lib/librte_cmdline.a 00:02:08.934 [147/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:08.934 [148/267] Linking static target lib/librte_timer.a 00:02:09.192 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:09.192 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:09.192 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:09.192 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:09.192 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:09.192 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:09.192 [155/267] Linking static target lib/librte_ethdev.a 00:02:09.461 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:09.461 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:09.461 [158/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.461 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:09.461 [160/267] Linking static target lib/librte_compressdev.a 00:02:09.718 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:09.718 [162/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:09.718 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:09.718 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:09.719 [165/267] Linking static target lib/librte_dmadev.a 00:02:09.976 [166/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:09.976 [167/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:09.976 [168/267] Linking static target lib/librte_hash.a 00:02:09.976 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:09.976 [170/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:09.976 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:09.976 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.234 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:10.234 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.493 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:10.493 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:10.493 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:10.493 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.493 [179/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:10.493 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:10.493 [181/267] Linking static target lib/librte_cryptodev.a 00:02:10.493 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:10.756 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:10.756 [184/267] Linking static target lib/librte_power.a 00:02:10.756 [185/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.756 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:10.756 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:10.756 [188/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:10.757 [189/267] Linking static target lib/librte_reorder.a 00:02:11.018 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:11.018 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:11.018 [192/267] Linking static target lib/librte_security.a 00:02:11.281 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.543 [194/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.543 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:11.543 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.543 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:11.543 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:11.543 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:11.804 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:11.804 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:11.804 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:12.064 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:12.064 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:12.064 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:12.064 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:12.325 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:12.325 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:12.325 [209/267] Linking static 
target drivers/libtmp_rte_bus_pci.a 00:02:12.325 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.325 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:12.325 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:12.325 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.325 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.325 [215/267] Linking static target drivers/librte_bus_vdev.a 00:02:12.325 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:12.325 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:12.325 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:12.325 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:12.325 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:12.585 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:12.585 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.585 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.585 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:12.585 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.844 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.104 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:14.110 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.110 [229/267] Linking target lib/librte_eal.so.24.1 00:02:14.110 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:14.110 [231/267] Linking target lib/librte_timer.so.24.1 00:02:14.110 [232/267] Linking target lib/librte_meter.so.24.1 00:02:14.110 [233/267] Linking target lib/librte_ring.so.24.1 00:02:14.110 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:14.110 [235/267] Linking target lib/librte_pci.so.24.1 00:02:14.110 [236/267] Linking target lib/librte_dmadev.so.24.1 00:02:14.370 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:14.370 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:14.370 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:14.370 [240/267] Linking target lib/librte_rcu.so.24.1 00:02:14.370 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:14.370 [242/267] Linking target lib/librte_mempool.so.24.1 00:02:14.370 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:14.370 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:14.370 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:14.370 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:14.370 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:14.370 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:14.629 [249/267] Generating 
symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:14.629 [250/267] Linking target lib/librte_compressdev.so.24.1 00:02:14.629 [251/267] Linking target lib/librte_reorder.so.24.1 00:02:14.629 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:02:14.629 [253/267] Linking target lib/librte_net.so.24.1 00:02:14.629 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:14.629 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:14.629 [256/267] Linking target lib/librte_security.so.24.1 00:02:14.629 [257/267] Linking target lib/librte_hash.so.24.1 00:02:14.629 [258/267] Linking target lib/librte_cmdline.so.24.1 00:02:14.890 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:14.890 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.890 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:14.890 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:15.152 [263/267] Linking target lib/librte_power.so.24.1 00:02:15.724 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:15.724 [265/267] Linking static target lib/librte_vhost.a 00:02:17.111 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.111 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:17.111 INFO: autodetecting backend as ninja 00:02:17.111 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:32.023 CC lib/ut/ut.o 00:02:32.023 CC lib/ut_mock/mock.o 00:02:32.023 CC lib/log/log.o 00:02:32.023 CC lib/log/log_flags.o 00:02:32.023 CC lib/log/log_deprecated.o 00:02:32.023 LIB libspdk_ut_mock.a 00:02:32.023 LIB libspdk_log.a 00:02:32.023 LIB libspdk_ut.a 00:02:32.023 SO libspdk_ut_mock.so.6.0 00:02:32.023 SO libspdk_ut.so.2.0 00:02:32.023 SO libspdk_log.so.7.1 00:02:32.023 SYMLINK libspdk_ut_mock.so 00:02:32.023 SYMLINK libspdk_ut.so 00:02:32.023 SYMLINK libspdk_log.so 00:02:32.023 CC lib/util/base64.o 00:02:32.023 CC lib/util/bit_array.o 00:02:32.023 CC lib/util/cpuset.o 00:02:32.023 CC lib/ioat/ioat.o 00:02:32.023 CC lib/util/crc16.o 00:02:32.023 CC lib/util/crc32.o 00:02:32.023 CC lib/util/crc32c.o 00:02:32.023 CXX lib/trace_parser/trace.o 00:02:32.023 CC lib/dma/dma.o 00:02:32.023 CC lib/vfio_user/host/vfio_user_pci.o 00:02:32.023 CC lib/util/crc32_ieee.o 00:02:32.023 CC lib/util/crc64.o 00:02:32.023 CC lib/util/dif.o 00:02:32.023 LIB libspdk_dma.a 00:02:32.023 SO libspdk_dma.so.5.0 00:02:32.023 CC lib/util/fd.o 00:02:32.023 CC lib/util/fd_group.o 00:02:32.023 CC lib/util/file.o 00:02:32.023 CC lib/util/hexlify.o 00:02:32.023 CC lib/util/iov.o 00:02:32.023 SYMLINK libspdk_dma.so 00:02:32.023 CC lib/vfio_user/host/vfio_user.o 00:02:32.023 LIB libspdk_ioat.a 00:02:32.023 SO libspdk_ioat.so.7.0 00:02:32.023 CC lib/util/math.o 00:02:32.023 SYMLINK libspdk_ioat.so 00:02:32.023 CC lib/util/net.o 00:02:32.023 CC lib/util/pipe.o 00:02:32.023 CC lib/util/strerror_tls.o 00:02:32.023 CC lib/util/string.o 00:02:32.023 CC lib/util/uuid.o 00:02:32.023 LIB libspdk_vfio_user.a 00:02:32.023 CC lib/util/xor.o 00:02:32.023 CC lib/util/zipf.o 00:02:32.023 CC lib/util/md5.o 00:02:32.023 SO libspdk_vfio_user.so.5.0 00:02:32.023 SYMLINK libspdk_vfio_user.so 00:02:32.285 LIB libspdk_util.a 00:02:32.285 SO libspdk_util.so.10.1 00:02:32.285 LIB 
libspdk_trace_parser.a 00:02:32.285 SO libspdk_trace_parser.so.6.0 00:02:32.285 SYMLINK libspdk_util.so 00:02:32.285 SYMLINK libspdk_trace_parser.so 00:02:32.546 CC lib/conf/conf.o 00:02:32.546 CC lib/env_dpdk/env.o 00:02:32.546 CC lib/env_dpdk/memory.o 00:02:32.546 CC lib/vmd/vmd.o 00:02:32.546 CC lib/vmd/led.o 00:02:32.546 CC lib/env_dpdk/pci.o 00:02:32.546 CC lib/env_dpdk/init.o 00:02:32.546 CC lib/rdma_utils/rdma_utils.o 00:02:32.546 CC lib/json/json_parse.o 00:02:32.546 CC lib/idxd/idxd.o 00:02:32.546 CC lib/json/json_util.o 00:02:32.806 LIB libspdk_conf.a 00:02:32.806 SO libspdk_conf.so.6.0 00:02:32.806 CC lib/json/json_write.o 00:02:32.806 LIB libspdk_rdma_utils.a 00:02:32.806 SO libspdk_rdma_utils.so.1.0 00:02:32.806 SYMLINK libspdk_conf.so 00:02:32.806 CC lib/idxd/idxd_user.o 00:02:32.806 SYMLINK libspdk_rdma_utils.so 00:02:32.806 CC lib/env_dpdk/threads.o 00:02:32.806 CC lib/env_dpdk/pci_ioat.o 00:02:32.806 CC lib/env_dpdk/pci_virtio.o 00:02:33.067 CC lib/rdma_provider/common.o 00:02:33.067 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:33.067 CC lib/idxd/idxd_kernel.o 00:02:33.067 CC lib/env_dpdk/pci_vmd.o 00:02:33.067 CC lib/env_dpdk/pci_idxd.o 00:02:33.067 LIB libspdk_json.a 00:02:33.067 CC lib/env_dpdk/pci_event.o 00:02:33.067 SO libspdk_json.so.6.0 00:02:33.067 LIB libspdk_idxd.a 00:02:33.067 SYMLINK libspdk_json.so 00:02:33.067 CC lib/env_dpdk/sigbus_handler.o 00:02:33.067 CC lib/env_dpdk/pci_dpdk.o 00:02:33.067 SO libspdk_idxd.so.12.1 00:02:33.067 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:33.067 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:33.067 LIB libspdk_rdma_provider.a 00:02:33.328 LIB libspdk_vmd.a 00:02:33.328 SYMLINK libspdk_idxd.so 00:02:33.328 SO libspdk_rdma_provider.so.7.0 00:02:33.328 SO libspdk_vmd.so.6.0 00:02:33.328 CC lib/jsonrpc/jsonrpc_server.o 00:02:33.328 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:33.328 CC lib/jsonrpc/jsonrpc_client.o 00:02:33.328 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:33.328 SYMLINK libspdk_rdma_provider.so 00:02:33.328 SYMLINK libspdk_vmd.so 00:02:33.589 LIB libspdk_jsonrpc.a 00:02:33.589 SO libspdk_jsonrpc.so.6.0 00:02:33.589 SYMLINK libspdk_jsonrpc.so 00:02:33.850 CC lib/rpc/rpc.o 00:02:34.111 LIB libspdk_env_dpdk.a 00:02:34.111 LIB libspdk_rpc.a 00:02:34.111 SO libspdk_env_dpdk.so.15.1 00:02:34.111 SO libspdk_rpc.so.6.0 00:02:34.111 SYMLINK libspdk_rpc.so 00:02:34.111 SYMLINK libspdk_env_dpdk.so 00:02:34.371 CC lib/notify/notify.o 00:02:34.371 CC lib/notify/notify_rpc.o 00:02:34.371 CC lib/trace/trace.o 00:02:34.371 CC lib/trace/trace_flags.o 00:02:34.371 CC lib/keyring/keyring_rpc.o 00:02:34.371 CC lib/trace/trace_rpc.o 00:02:34.371 CC lib/keyring/keyring.o 00:02:34.371 LIB libspdk_notify.a 00:02:34.371 SO libspdk_notify.so.6.0 00:02:34.646 LIB libspdk_keyring.a 00:02:34.646 SYMLINK libspdk_notify.so 00:02:34.646 LIB libspdk_trace.a 00:02:34.646 SO libspdk_keyring.so.2.0 00:02:34.646 SO libspdk_trace.so.11.0 00:02:34.646 SYMLINK libspdk_keyring.so 00:02:34.646 SYMLINK libspdk_trace.so 00:02:34.956 CC lib/thread/thread.o 00:02:34.956 CC lib/thread/iobuf.o 00:02:34.956 CC lib/sock/sock.o 00:02:34.956 CC lib/sock/sock_rpc.o 00:02:35.217 LIB libspdk_sock.a 00:02:35.217 SO libspdk_sock.so.10.0 00:02:35.479 SYMLINK libspdk_sock.so 00:02:35.479 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:35.479 CC lib/nvme/nvme_ctrlr.o 00:02:35.479 CC lib/nvme/nvme_ns_cmd.o 00:02:35.479 CC lib/nvme/nvme_ns.o 00:02:35.479 CC lib/nvme/nvme_pcie.o 00:02:35.479 CC lib/nvme/nvme_qpair.o 00:02:35.479 CC lib/nvme/nvme.o 00:02:35.479 CC lib/nvme/nvme_fabric.o 00:02:35.479 CC 
lib/nvme/nvme_pcie_common.o 00:02:36.050 LIB libspdk_thread.a 00:02:36.050 SO libspdk_thread.so.11.0 00:02:36.050 CC lib/nvme/nvme_quirks.o 00:02:36.310 CC lib/nvme/nvme_transport.o 00:02:36.310 SYMLINK libspdk_thread.so 00:02:36.310 CC lib/nvme/nvme_discovery.o 00:02:36.310 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:36.310 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:36.310 CC lib/nvme/nvme_tcp.o 00:02:36.310 CC lib/accel/accel.o 00:02:36.310 CC lib/nvme/nvme_opal.o 00:02:36.587 CC lib/nvme/nvme_io_msg.o 00:02:36.587 CC lib/nvme/nvme_poll_group.o 00:02:36.869 CC lib/nvme/nvme_zns.o 00:02:36.869 CC lib/nvme/nvme_stubs.o 00:02:36.869 CC lib/nvme/nvme_auth.o 00:02:36.869 CC lib/nvme/nvme_cuse.o 00:02:36.869 CC lib/nvme/nvme_rdma.o 00:02:37.130 CC lib/accel/accel_rpc.o 00:02:37.130 CC lib/accel/accel_sw.o 00:02:37.391 CC lib/blob/blobstore.o 00:02:37.391 CC lib/virtio/virtio.o 00:02:37.391 CC lib/init/json_config.o 00:02:37.651 LIB libspdk_accel.a 00:02:37.651 SO libspdk_accel.so.16.0 00:02:37.651 CC lib/fsdev/fsdev.o 00:02:37.651 CC lib/init/subsystem.o 00:02:37.651 SYMLINK libspdk_accel.so 00:02:37.651 CC lib/init/subsystem_rpc.o 00:02:37.651 CC lib/virtio/virtio_vhost_user.o 00:02:37.651 CC lib/virtio/virtio_vfio_user.o 00:02:37.651 CC lib/blob/request.o 00:02:37.651 CC lib/blob/zeroes.o 00:02:37.651 CC lib/blob/blob_bs_dev.o 00:02:37.911 CC lib/init/rpc.o 00:02:37.911 CC lib/fsdev/fsdev_io.o 00:02:37.911 CC lib/virtio/virtio_pci.o 00:02:37.911 LIB libspdk_init.a 00:02:37.911 CC lib/fsdev/fsdev_rpc.o 00:02:37.911 SO libspdk_init.so.6.0 00:02:37.911 SYMLINK libspdk_init.so 00:02:38.172 CC lib/bdev/bdev.o 00:02:38.172 CC lib/bdev/bdev_rpc.o 00:02:38.172 CC lib/bdev/bdev_zone.o 00:02:38.172 CC lib/bdev/part.o 00:02:38.172 CC lib/event/app.o 00:02:38.172 LIB libspdk_virtio.a 00:02:38.172 SO libspdk_virtio.so.7.0 00:02:38.172 LIB libspdk_nvme.a 00:02:38.172 LIB libspdk_fsdev.a 00:02:38.172 CC lib/event/reactor.o 00:02:38.172 CC lib/bdev/scsi_nvme.o 00:02:38.172 SO libspdk_fsdev.so.2.0 00:02:38.172 SYMLINK libspdk_virtio.so 00:02:38.172 CC lib/event/log_rpc.o 00:02:38.433 CC lib/event/app_rpc.o 00:02:38.433 SYMLINK libspdk_fsdev.so 00:02:38.433 CC lib/event/scheduler_static.o 00:02:38.433 SO libspdk_nvme.so.15.0 00:02:38.433 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:38.693 SYMLINK libspdk_nvme.so 00:02:38.693 LIB libspdk_event.a 00:02:38.693 SO libspdk_event.so.14.0 00:02:38.693 SYMLINK libspdk_event.so 00:02:39.284 LIB libspdk_fuse_dispatcher.a 00:02:39.284 SO libspdk_fuse_dispatcher.so.1.0 00:02:39.284 SYMLINK libspdk_fuse_dispatcher.so 00:02:40.223 LIB libspdk_blob.a 00:02:40.483 SO libspdk_blob.so.12.0 00:02:40.483 SYMLINK libspdk_blob.so 00:02:40.743 CC lib/blobfs/blobfs.o 00:02:40.743 CC lib/blobfs/tree.o 00:02:40.743 CC lib/lvol/lvol.o 00:02:41.003 LIB libspdk_bdev.a 00:02:41.003 SO libspdk_bdev.so.17.0 00:02:41.003 SYMLINK libspdk_bdev.so 00:02:41.262 CC lib/scsi/lun.o 00:02:41.262 CC lib/scsi/dev.o 00:02:41.262 CC lib/scsi/scsi.o 00:02:41.262 CC lib/scsi/port.o 00:02:41.262 CC lib/nvmf/ctrlr.o 00:02:41.262 CC lib/ftl/ftl_core.o 00:02:41.262 CC lib/ublk/ublk.o 00:02:41.262 CC lib/nbd/nbd.o 00:02:41.573 CC lib/ublk/ublk_rpc.o 00:02:41.574 CC lib/scsi/scsi_bdev.o 00:02:41.574 CC lib/ftl/ftl_init.o 00:02:41.574 LIB libspdk_blobfs.a 00:02:41.574 SO libspdk_blobfs.so.11.0 00:02:41.574 CC lib/nbd/nbd_rpc.o 00:02:41.574 CC lib/scsi/scsi_pr.o 00:02:41.574 CC lib/ftl/ftl_layout.o 00:02:41.574 SYMLINK libspdk_blobfs.so 00:02:41.574 CC lib/scsi/scsi_rpc.o 00:02:41.574 LIB libspdk_lvol.a 00:02:41.574 CC 
lib/nvmf/ctrlr_discovery.o 00:02:41.857 SO libspdk_lvol.so.11.0 00:02:41.857 CC lib/ftl/ftl_debug.o 00:02:41.857 SYMLINK libspdk_lvol.so 00:02:41.857 LIB libspdk_nbd.a 00:02:41.857 CC lib/nvmf/ctrlr_bdev.o 00:02:41.857 CC lib/ftl/ftl_io.o 00:02:41.857 SO libspdk_nbd.so.7.0 00:02:41.857 SYMLINK libspdk_nbd.so 00:02:41.857 CC lib/ftl/ftl_sb.o 00:02:41.857 CC lib/ftl/ftl_l2p.o 00:02:41.857 CC lib/scsi/task.o 00:02:41.857 CC lib/ftl/ftl_l2p_flat.o 00:02:41.857 LIB libspdk_ublk.a 00:02:41.857 CC lib/ftl/ftl_nv_cache.o 00:02:41.857 CC lib/ftl/ftl_band.o 00:02:42.116 SO libspdk_ublk.so.3.0 00:02:42.116 CC lib/ftl/ftl_band_ops.o 00:02:42.116 LIB libspdk_scsi.a 00:02:42.116 CC lib/nvmf/subsystem.o 00:02:42.116 SYMLINK libspdk_ublk.so 00:02:42.117 SO libspdk_scsi.so.9.0 00:02:42.117 CC lib/ftl/ftl_writer.o 00:02:42.117 CC lib/nvmf/nvmf.o 00:02:42.117 CC lib/ftl/ftl_rq.o 00:02:42.117 SYMLINK libspdk_scsi.so 00:02:42.117 CC lib/ftl/ftl_reloc.o 00:02:42.377 CC lib/nvmf/nvmf_rpc.o 00:02:42.377 CC lib/ftl/ftl_l2p_cache.o 00:02:42.377 CC lib/iscsi/conn.o 00:02:42.377 CC lib/vhost/vhost.o 00:02:42.377 CC lib/iscsi/init_grp.o 00:02:42.377 CC lib/ftl/ftl_p2l.o 00:02:42.636 CC lib/iscsi/iscsi.o 00:02:42.898 CC lib/iscsi/param.o 00:02:42.898 CC lib/iscsi/portal_grp.o 00:02:42.898 CC lib/iscsi/tgt_node.o 00:02:42.898 CC lib/ftl/ftl_p2l_log.o 00:02:43.158 CC lib/ftl/mngt/ftl_mngt.o 00:02:43.158 CC lib/vhost/vhost_rpc.o 00:02:43.158 CC lib/nvmf/transport.o 00:02:43.158 CC lib/nvmf/tcp.o 00:02:43.158 CC lib/iscsi/iscsi_subsystem.o 00:02:43.158 CC lib/iscsi/iscsi_rpc.o 00:02:43.419 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:43.419 CC lib/vhost/vhost_scsi.o 00:02:43.419 CC lib/iscsi/task.o 00:02:43.419 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:43.419 CC lib/vhost/vhost_blk.o 00:02:43.419 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:43.419 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:43.679 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:43.679 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:43.679 CC lib/nvmf/stubs.o 00:02:43.679 CC lib/nvmf/mdns_server.o 00:02:43.679 CC lib/nvmf/rdma.o 00:02:43.679 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:43.941 CC lib/nvmf/auth.o 00:02:43.941 CC lib/vhost/rte_vhost_user.o 00:02:43.941 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:43.941 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:44.201 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:44.201 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:44.201 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:44.201 LIB libspdk_iscsi.a 00:02:44.201 CC lib/ftl/utils/ftl_conf.o 00:02:44.201 SO libspdk_iscsi.so.8.0 00:02:44.201 CC lib/ftl/utils/ftl_md.o 00:02:44.462 CC lib/ftl/utils/ftl_mempool.o 00:02:44.462 SYMLINK libspdk_iscsi.so 00:02:44.462 CC lib/ftl/utils/ftl_bitmap.o 00:02:44.462 CC lib/ftl/utils/ftl_property.o 00:02:44.462 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:44.462 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:44.462 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:44.462 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:44.462 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:44.723 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:44.723 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:44.723 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:44.723 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:44.723 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:44.723 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:44.723 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:44.723 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:44.723 CC lib/ftl/base/ftl_base_dev.o 00:02:44.723 CC lib/ftl/base/ftl_base_bdev.o 00:02:44.723 CC lib/ftl/ftl_trace.o 00:02:44.985 LIB libspdk_vhost.a 00:02:44.985 SO 
libspdk_vhost.so.8.0 00:02:44.985 SYMLINK libspdk_vhost.so 00:02:44.985 LIB libspdk_ftl.a 00:02:45.246 SO libspdk_ftl.so.9.0 00:02:45.507 SYMLINK libspdk_ftl.so 00:02:45.508 LIB libspdk_nvmf.a 00:02:45.768 SO libspdk_nvmf.so.20.0 00:02:46.029 SYMLINK libspdk_nvmf.so 00:02:46.288 CC module/env_dpdk/env_dpdk_rpc.o 00:02:46.288 CC module/keyring/file/keyring.o 00:02:46.288 CC module/sock/posix/posix.o 00:02:46.288 CC module/blob/bdev/blob_bdev.o 00:02:46.288 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:46.288 CC module/accel/dsa/accel_dsa.o 00:02:46.288 CC module/keyring/linux/keyring.o 00:02:46.288 CC module/fsdev/aio/fsdev_aio.o 00:02:46.288 CC module/accel/error/accel_error.o 00:02:46.288 CC module/accel/ioat/accel_ioat.o 00:02:46.288 LIB libspdk_env_dpdk_rpc.a 00:02:46.288 SO libspdk_env_dpdk_rpc.so.6.0 00:02:46.288 CC module/keyring/file/keyring_rpc.o 00:02:46.288 SYMLINK libspdk_env_dpdk_rpc.so 00:02:46.288 CC module/accel/ioat/accel_ioat_rpc.o 00:02:46.288 CC module/keyring/linux/keyring_rpc.o 00:02:46.549 CC module/accel/error/accel_error_rpc.o 00:02:46.549 LIB libspdk_scheduler_dynamic.a 00:02:46.549 LIB libspdk_keyring_file.a 00:02:46.549 SO libspdk_scheduler_dynamic.so.4.0 00:02:46.549 LIB libspdk_accel_ioat.a 00:02:46.549 SO libspdk_keyring_file.so.2.0 00:02:46.549 LIB libspdk_keyring_linux.a 00:02:46.549 SO libspdk_accel_ioat.so.6.0 00:02:46.549 SYMLINK libspdk_scheduler_dynamic.so 00:02:46.549 LIB libspdk_blob_bdev.a 00:02:46.549 SO libspdk_keyring_linux.so.1.0 00:02:46.549 CC module/accel/dsa/accel_dsa_rpc.o 00:02:46.549 SYMLINK libspdk_keyring_file.so 00:02:46.549 SO libspdk_blob_bdev.so.12.0 00:02:46.549 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:46.549 LIB libspdk_accel_error.a 00:02:46.549 SYMLINK libspdk_accel_ioat.so 00:02:46.549 CC module/accel/iaa/accel_iaa.o 00:02:46.549 CC module/accel/iaa/accel_iaa_rpc.o 00:02:46.549 SYMLINK libspdk_keyring_linux.so 00:02:46.549 SO libspdk_accel_error.so.2.0 00:02:46.549 SYMLINK libspdk_blob_bdev.so 00:02:46.549 CC module/fsdev/aio/linux_aio_mgr.o 00:02:46.549 SYMLINK libspdk_accel_error.so 00:02:46.549 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:46.809 LIB libspdk_accel_dsa.a 00:02:46.809 SO libspdk_accel_dsa.so.5.0 00:02:46.809 LIB libspdk_accel_iaa.a 00:02:46.809 CC module/scheduler/gscheduler/gscheduler.o 00:02:46.809 SO libspdk_accel_iaa.so.3.0 00:02:46.809 SYMLINK libspdk_accel_dsa.so 00:02:46.809 SYMLINK libspdk_accel_iaa.so 00:02:46.809 LIB libspdk_scheduler_dpdk_governor.a 00:02:46.809 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:46.809 LIB libspdk_scheduler_gscheduler.a 00:02:46.809 LIB libspdk_sock_posix.a 00:02:46.809 CC module/bdev/error/vbdev_error.o 00:02:46.809 CC module/bdev/delay/vbdev_delay.o 00:02:46.809 CC module/blobfs/bdev/blobfs_bdev.o 00:02:46.809 SO libspdk_scheduler_gscheduler.so.4.0 00:02:46.809 CC module/bdev/gpt/gpt.o 00:02:46.809 SO libspdk_sock_posix.so.6.0 00:02:46.809 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:46.809 CC module/bdev/gpt/vbdev_gpt.o 00:02:47.071 LIB libspdk_fsdev_aio.a 00:02:47.071 CC module/bdev/lvol/vbdev_lvol.o 00:02:47.071 CC module/bdev/malloc/bdev_malloc.o 00:02:47.071 SO libspdk_fsdev_aio.so.1.0 00:02:47.071 SYMLINK libspdk_scheduler_gscheduler.so 00:02:47.071 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:47.071 SYMLINK libspdk_sock_posix.so 00:02:47.071 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:47.071 SYMLINK libspdk_fsdev_aio.so 00:02:47.071 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:47.071 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:47.071 CC 
module/bdev/error/vbdev_error_rpc.o 00:02:47.071 LIB libspdk_blobfs_bdev.a 00:02:47.071 SO libspdk_blobfs_bdev.so.6.0 00:02:47.071 SYMLINK libspdk_blobfs_bdev.so 00:02:47.071 LIB libspdk_bdev_gpt.a 00:02:47.332 SO libspdk_bdev_gpt.so.6.0 00:02:47.332 CC module/bdev/null/bdev_null.o 00:02:47.332 LIB libspdk_bdev_error.a 00:02:47.332 LIB libspdk_bdev_delay.a 00:02:47.332 CC module/bdev/nvme/bdev_nvme.o 00:02:47.332 SO libspdk_bdev_error.so.6.0 00:02:47.332 SO libspdk_bdev_delay.so.6.0 00:02:47.332 CC module/bdev/passthru/vbdev_passthru.o 00:02:47.332 SYMLINK libspdk_bdev_gpt.so 00:02:47.332 LIB libspdk_bdev_malloc.a 00:02:47.332 CC module/bdev/raid/bdev_raid.o 00:02:47.332 SO libspdk_bdev_malloc.so.6.0 00:02:47.332 SYMLINK libspdk_bdev_error.so 00:02:47.332 CC module/bdev/raid/bdev_raid_rpc.o 00:02:47.332 SYMLINK libspdk_bdev_delay.so 00:02:47.332 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:47.332 CC module/bdev/nvme/nvme_rpc.o 00:02:47.332 SYMLINK libspdk_bdev_malloc.so 00:02:47.332 CC module/bdev/nvme/bdev_mdns_client.o 00:02:47.332 CC module/bdev/split/vbdev_split.o 00:02:47.332 LIB libspdk_bdev_lvol.a 00:02:47.332 CC module/bdev/null/bdev_null_rpc.o 00:02:47.593 SO libspdk_bdev_lvol.so.6.0 00:02:47.593 CC module/bdev/split/vbdev_split_rpc.o 00:02:47.593 CC module/bdev/raid/bdev_raid_sb.o 00:02:47.593 SYMLINK libspdk_bdev_lvol.so 00:02:47.593 CC module/bdev/raid/raid0.o 00:02:47.593 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:47.593 CC module/bdev/raid/raid1.o 00:02:47.593 LIB libspdk_bdev_null.a 00:02:47.593 SO libspdk_bdev_null.so.6.0 00:02:47.593 CC module/bdev/nvme/vbdev_opal.o 00:02:47.593 LIB libspdk_bdev_split.a 00:02:47.593 SO libspdk_bdev_split.so.6.0 00:02:47.593 SYMLINK libspdk_bdev_null.so 00:02:47.593 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:47.853 SYMLINK libspdk_bdev_split.so 00:02:47.853 LIB libspdk_bdev_passthru.a 00:02:47.853 SO libspdk_bdev_passthru.so.6.0 00:02:47.853 CC module/bdev/raid/concat.o 00:02:47.853 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:47.853 SYMLINK libspdk_bdev_passthru.so 00:02:47.853 CC module/bdev/xnvme/bdev_xnvme.o 00:02:47.853 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:47.853 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:47.853 CC module/bdev/aio/bdev_aio.o 00:02:48.147 CC module/bdev/ftl/bdev_ftl.o 00:02:48.147 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:48.147 CC module/bdev/iscsi/bdev_iscsi.o 00:02:48.147 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:48.147 LIB libspdk_bdev_xnvme.a 00:02:48.147 SO libspdk_bdev_xnvme.so.3.0 00:02:48.147 SYMLINK libspdk_bdev_xnvme.so 00:02:48.147 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:48.147 CC module/bdev/aio/bdev_aio_rpc.o 00:02:48.147 LIB libspdk_bdev_zone_block.a 00:02:48.147 LIB libspdk_bdev_ftl.a 00:02:48.148 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:48.148 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:48.148 SO libspdk_bdev_zone_block.so.6.0 00:02:48.148 SO libspdk_bdev_ftl.so.6.0 00:02:48.407 SYMLINK libspdk_bdev_zone_block.so 00:02:48.407 SYMLINK libspdk_bdev_ftl.so 00:02:48.407 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:48.407 LIB libspdk_bdev_aio.a 00:02:48.407 SO libspdk_bdev_aio.so.6.0 00:02:48.407 LIB libspdk_bdev_iscsi.a 00:02:48.407 SYMLINK libspdk_bdev_aio.so 00:02:48.407 LIB libspdk_bdev_raid.a 00:02:48.407 SO libspdk_bdev_iscsi.so.6.0 00:02:48.407 SYMLINK libspdk_bdev_iscsi.so 00:02:48.407 SO libspdk_bdev_raid.so.6.0 00:02:48.669 SYMLINK libspdk_bdev_raid.so 00:02:48.669 LIB libspdk_bdev_virtio.a 00:02:48.928 SO libspdk_bdev_virtio.so.6.0 00:02:48.928 SYMLINK 
libspdk_bdev_virtio.so 00:02:49.868 LIB libspdk_bdev_nvme.a 00:02:49.869 SO libspdk_bdev_nvme.so.7.1 00:02:49.869 SYMLINK libspdk_bdev_nvme.so 00:02:50.506 CC module/event/subsystems/fsdev/fsdev.o 00:02:50.506 CC module/event/subsystems/vmd/vmd.o 00:02:50.506 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:50.506 CC module/event/subsystems/keyring/keyring.o 00:02:50.506 CC module/event/subsystems/iobuf/iobuf.o 00:02:50.506 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:50.506 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:50.506 CC module/event/subsystems/scheduler/scheduler.o 00:02:50.506 CC module/event/subsystems/sock/sock.o 00:02:50.506 LIB libspdk_event_fsdev.a 00:02:50.506 LIB libspdk_event_vmd.a 00:02:50.506 LIB libspdk_event_keyring.a 00:02:50.506 SO libspdk_event_fsdev.so.1.0 00:02:50.506 LIB libspdk_event_iobuf.a 00:02:50.506 LIB libspdk_event_vhost_blk.a 00:02:50.506 LIB libspdk_event_scheduler.a 00:02:50.506 SO libspdk_event_vmd.so.6.0 00:02:50.506 SO libspdk_event_keyring.so.1.0 00:02:50.506 LIB libspdk_event_sock.a 00:02:50.506 SO libspdk_event_scheduler.so.4.0 00:02:50.506 SO libspdk_event_iobuf.so.3.0 00:02:50.506 SO libspdk_event_vhost_blk.so.3.0 00:02:50.506 SO libspdk_event_sock.so.5.0 00:02:50.506 SYMLINK libspdk_event_fsdev.so 00:02:50.506 SYMLINK libspdk_event_vmd.so 00:02:50.506 SYMLINK libspdk_event_keyring.so 00:02:50.506 SYMLINK libspdk_event_scheduler.so 00:02:50.506 SYMLINK libspdk_event_vhost_blk.so 00:02:50.506 SYMLINK libspdk_event_iobuf.so 00:02:50.506 SYMLINK libspdk_event_sock.so 00:02:50.766 CC module/event/subsystems/accel/accel.o 00:02:50.766 LIB libspdk_event_accel.a 00:02:50.766 SO libspdk_event_accel.so.6.0 00:02:51.024 SYMLINK libspdk_event_accel.so 00:02:51.283 CC module/event/subsystems/bdev/bdev.o 00:02:51.283 LIB libspdk_event_bdev.a 00:02:51.283 SO libspdk_event_bdev.so.6.0 00:02:51.542 SYMLINK libspdk_event_bdev.so 00:02:51.542 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:51.542 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:51.542 CC module/event/subsystems/ublk/ublk.o 00:02:51.542 CC module/event/subsystems/scsi/scsi.o 00:02:51.542 CC module/event/subsystems/nbd/nbd.o 00:02:51.802 LIB libspdk_event_ublk.a 00:02:51.802 LIB libspdk_event_scsi.a 00:02:51.802 SO libspdk_event_ublk.so.3.0 00:02:51.802 SO libspdk_event_scsi.so.6.0 00:02:51.802 LIB libspdk_event_nbd.a 00:02:51.802 LIB libspdk_event_nvmf.a 00:02:51.802 SO libspdk_event_nbd.so.6.0 00:02:51.802 SO libspdk_event_nvmf.so.6.0 00:02:51.802 SYMLINK libspdk_event_ublk.so 00:02:51.802 SYMLINK libspdk_event_scsi.so 00:02:51.802 SYMLINK libspdk_event_nbd.so 00:02:51.802 SYMLINK libspdk_event_nvmf.so 00:02:52.063 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:52.063 CC module/event/subsystems/iscsi/iscsi.o 00:02:52.063 LIB libspdk_event_vhost_scsi.a 00:02:52.063 SO libspdk_event_vhost_scsi.so.3.0 00:02:52.063 LIB libspdk_event_iscsi.a 00:02:52.063 SO libspdk_event_iscsi.so.6.0 00:02:52.324 SYMLINK libspdk_event_vhost_scsi.so 00:02:52.324 SYMLINK libspdk_event_iscsi.so 00:02:52.324 SO libspdk.so.6.0 00:02:52.324 SYMLINK libspdk.so 00:02:52.584 CXX app/trace/trace.o 00:02:52.584 TEST_HEADER include/spdk/accel.h 00:02:52.584 TEST_HEADER include/spdk/accel_module.h 00:02:52.584 TEST_HEADER include/spdk/assert.h 00:02:52.584 CC app/trace_record/trace_record.o 00:02:52.584 TEST_HEADER include/spdk/barrier.h 00:02:52.584 TEST_HEADER include/spdk/base64.h 00:02:52.584 TEST_HEADER include/spdk/bdev.h 00:02:52.584 TEST_HEADER include/spdk/bdev_module.h 00:02:52.584 TEST_HEADER 
include/spdk/bdev_zone.h 00:02:52.584 TEST_HEADER include/spdk/bit_array.h 00:02:52.584 TEST_HEADER include/spdk/bit_pool.h 00:02:52.584 TEST_HEADER include/spdk/blob_bdev.h 00:02:52.584 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:52.584 TEST_HEADER include/spdk/blobfs.h 00:02:52.584 TEST_HEADER include/spdk/blob.h 00:02:52.584 TEST_HEADER include/spdk/conf.h 00:02:52.584 TEST_HEADER include/spdk/config.h 00:02:52.584 TEST_HEADER include/spdk/cpuset.h 00:02:52.584 TEST_HEADER include/spdk/crc16.h 00:02:52.584 CC app/iscsi_tgt/iscsi_tgt.o 00:02:52.584 TEST_HEADER include/spdk/crc32.h 00:02:52.584 TEST_HEADER include/spdk/crc64.h 00:02:52.584 TEST_HEADER include/spdk/dif.h 00:02:52.584 TEST_HEADER include/spdk/dma.h 00:02:52.584 TEST_HEADER include/spdk/endian.h 00:02:52.584 CC app/nvmf_tgt/nvmf_main.o 00:02:52.584 TEST_HEADER include/spdk/env_dpdk.h 00:02:52.584 TEST_HEADER include/spdk/env.h 00:02:52.584 TEST_HEADER include/spdk/event.h 00:02:52.584 TEST_HEADER include/spdk/fd_group.h 00:02:52.584 TEST_HEADER include/spdk/fd.h 00:02:52.584 TEST_HEADER include/spdk/file.h 00:02:52.584 TEST_HEADER include/spdk/fsdev.h 00:02:52.584 TEST_HEADER include/spdk/fsdev_module.h 00:02:52.584 TEST_HEADER include/spdk/ftl.h 00:02:52.584 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:52.584 TEST_HEADER include/spdk/gpt_spec.h 00:02:52.584 TEST_HEADER include/spdk/hexlify.h 00:02:52.584 TEST_HEADER include/spdk/histogram_data.h 00:02:52.584 TEST_HEADER include/spdk/idxd.h 00:02:52.584 TEST_HEADER include/spdk/idxd_spec.h 00:02:52.584 CC test/thread/poller_perf/poller_perf.o 00:02:52.584 TEST_HEADER include/spdk/init.h 00:02:52.584 TEST_HEADER include/spdk/ioat.h 00:02:52.584 TEST_HEADER include/spdk/ioat_spec.h 00:02:52.584 CC examples/ioat/perf/perf.o 00:02:52.584 CC examples/util/zipf/zipf.o 00:02:52.584 TEST_HEADER include/spdk/iscsi_spec.h 00:02:52.584 TEST_HEADER include/spdk/json.h 00:02:52.584 TEST_HEADER include/spdk/jsonrpc.h 00:02:52.584 TEST_HEADER include/spdk/keyring.h 00:02:52.584 TEST_HEADER include/spdk/keyring_module.h 00:02:52.584 TEST_HEADER include/spdk/likely.h 00:02:52.584 TEST_HEADER include/spdk/log.h 00:02:52.584 TEST_HEADER include/spdk/lvol.h 00:02:52.584 CC test/dma/test_dma/test_dma.o 00:02:52.584 TEST_HEADER include/spdk/md5.h 00:02:52.584 TEST_HEADER include/spdk/memory.h 00:02:52.584 TEST_HEADER include/spdk/mmio.h 00:02:52.584 TEST_HEADER include/spdk/nbd.h 00:02:52.584 TEST_HEADER include/spdk/net.h 00:02:52.584 TEST_HEADER include/spdk/notify.h 00:02:52.584 TEST_HEADER include/spdk/nvme.h 00:02:52.584 TEST_HEADER include/spdk/nvme_intel.h 00:02:52.584 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:52.584 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:52.584 TEST_HEADER include/spdk/nvme_spec.h 00:02:52.584 TEST_HEADER include/spdk/nvme_zns.h 00:02:52.584 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:52.584 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:52.584 TEST_HEADER include/spdk/nvmf.h 00:02:52.584 CC test/app/bdev_svc/bdev_svc.o 00:02:52.584 TEST_HEADER include/spdk/nvmf_spec.h 00:02:52.584 TEST_HEADER include/spdk/nvmf_transport.h 00:02:52.584 TEST_HEADER include/spdk/opal.h 00:02:52.584 TEST_HEADER include/spdk/opal_spec.h 00:02:52.584 TEST_HEADER include/spdk/pci_ids.h 00:02:52.584 TEST_HEADER include/spdk/pipe.h 00:02:52.584 TEST_HEADER include/spdk/queue.h 00:02:52.584 TEST_HEADER include/spdk/reduce.h 00:02:52.584 TEST_HEADER include/spdk/rpc.h 00:02:52.584 TEST_HEADER include/spdk/scheduler.h 00:02:52.584 TEST_HEADER include/spdk/scsi.h 00:02:52.584 
TEST_HEADER include/spdk/scsi_spec.h 00:02:52.584 TEST_HEADER include/spdk/sock.h 00:02:52.584 TEST_HEADER include/spdk/stdinc.h 00:02:52.584 TEST_HEADER include/spdk/string.h 00:02:52.584 TEST_HEADER include/spdk/thread.h 00:02:52.584 TEST_HEADER include/spdk/trace.h 00:02:52.584 TEST_HEADER include/spdk/trace_parser.h 00:02:52.584 TEST_HEADER include/spdk/tree.h 00:02:52.584 TEST_HEADER include/spdk/ublk.h 00:02:52.584 TEST_HEADER include/spdk/util.h 00:02:52.584 TEST_HEADER include/spdk/uuid.h 00:02:52.584 TEST_HEADER include/spdk/version.h 00:02:52.584 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:52.584 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:52.584 TEST_HEADER include/spdk/vhost.h 00:02:52.584 TEST_HEADER include/spdk/vmd.h 00:02:52.584 TEST_HEADER include/spdk/xor.h 00:02:52.584 TEST_HEADER include/spdk/zipf.h 00:02:52.584 CXX test/cpp_headers/accel.o 00:02:52.845 LINK spdk_trace_record 00:02:52.845 LINK poller_perf 00:02:52.845 LINK nvmf_tgt 00:02:52.845 LINK zipf 00:02:52.845 LINK ioat_perf 00:02:52.845 LINK bdev_svc 00:02:52.845 CXX test/cpp_headers/accel_module.o 00:02:52.845 LINK iscsi_tgt 00:02:52.845 LINK spdk_trace 00:02:52.845 CC test/app/histogram_perf/histogram_perf.o 00:02:52.845 CC examples/ioat/verify/verify.o 00:02:52.845 CC test/app/jsoncat/jsoncat.o 00:02:53.106 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:53.106 CXX test/cpp_headers/assert.o 00:02:53.106 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:53.106 LINK histogram_perf 00:02:53.106 LINK jsoncat 00:02:53.106 LINK test_dma 00:02:53.106 CC test/event/event_perf/event_perf.o 00:02:53.106 CXX test/cpp_headers/barrier.o 00:02:53.106 LINK verify 00:02:53.106 CC app/spdk_tgt/spdk_tgt.o 00:02:53.106 CC test/env/mem_callbacks/mem_callbacks.o 00:02:53.106 LINK interrupt_tgt 00:02:53.368 LINK event_perf 00:02:53.368 CC test/event/reactor/reactor.o 00:02:53.368 CXX test/cpp_headers/base64.o 00:02:53.368 CC test/rpc_client/rpc_client_test.o 00:02:53.368 CC app/spdk_lspci/spdk_lspci.o 00:02:53.368 LINK nvme_fuzz 00:02:53.368 CC app/spdk_nvme_perf/perf.o 00:02:53.368 LINK spdk_tgt 00:02:53.368 LINK reactor 00:02:53.368 CXX test/cpp_headers/bdev.o 00:02:53.368 LINK rpc_client_test 00:02:53.628 LINK spdk_lspci 00:02:53.628 CC app/spdk_nvme_identify/identify.o 00:02:53.628 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:53.628 CC examples/thread/thread/thread_ex.o 00:02:53.628 CC test/event/reactor_perf/reactor_perf.o 00:02:53.628 CXX test/cpp_headers/bdev_module.o 00:02:53.629 CC test/event/app_repeat/app_repeat.o 00:02:53.629 LINK mem_callbacks 00:02:53.629 CC test/event/scheduler/scheduler.o 00:02:53.890 LINK reactor_perf 00:02:53.890 LINK app_repeat 00:02:53.890 CXX test/cpp_headers/bdev_zone.o 00:02:53.890 LINK thread 00:02:53.890 CC test/accel/dif/dif.o 00:02:53.890 CC test/env/vtophys/vtophys.o 00:02:53.890 CXX test/cpp_headers/bit_array.o 00:02:53.890 LINK scheduler 00:02:53.890 CC app/spdk_nvme_discover/discovery_aer.o 00:02:54.150 CC test/app/stub/stub.o 00:02:54.150 LINK vtophys 00:02:54.150 CXX test/cpp_headers/bit_pool.o 00:02:54.150 CC examples/sock/hello_world/hello_sock.o 00:02:54.150 LINK spdk_nvme_discover 00:02:54.150 LINK spdk_nvme_perf 00:02:54.150 LINK spdk_nvme_identify 00:02:54.150 LINK stub 00:02:54.150 CXX test/cpp_headers/blob_bdev.o 00:02:54.422 CC examples/vmd/lsvmd/lsvmd.o 00:02:54.422 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:54.422 CXX test/cpp_headers/blobfs_bdev.o 00:02:54.422 LINK hello_sock 00:02:54.422 LINK dif 00:02:54.422 LINK lsvmd 00:02:54.422 CC 
app/spdk_top/spdk_top.o 00:02:54.422 CC examples/idxd/perf/perf.o 00:02:54.422 LINK env_dpdk_post_init 00:02:54.422 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:54.422 CXX test/cpp_headers/blobfs.o 00:02:54.683 CC test/blobfs/mkfs/mkfs.o 00:02:54.683 CC test/env/memory/memory_ut.o 00:02:54.683 CC examples/vmd/led/led.o 00:02:54.683 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:54.683 CC test/env/pci/pci_ut.o 00:02:54.683 CXX test/cpp_headers/blob.o 00:02:54.683 LINK mkfs 00:02:54.683 LINK led 00:02:54.683 LINK hello_fsdev 00:02:54.683 LINK idxd_perf 00:02:54.683 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:54.683 CXX test/cpp_headers/conf.o 00:02:54.944 CXX test/cpp_headers/config.o 00:02:54.944 CXX test/cpp_headers/cpuset.o 00:02:54.944 CC examples/accel/perf/accel_perf.o 00:02:54.944 LINK pci_ut 00:02:54.944 CC app/vhost/vhost.o 00:02:54.944 CC app/spdk_dd/spdk_dd.o 00:02:55.204 CC test/lvol/esnap/esnap.o 00:02:55.204 CXX test/cpp_headers/crc16.o 00:02:55.204 CXX test/cpp_headers/crc32.o 00:02:55.204 LINK vhost 00:02:55.204 CXX test/cpp_headers/crc64.o 00:02:55.204 LINK vhost_fuzz 00:02:55.204 CXX test/cpp_headers/dif.o 00:02:55.204 CXX test/cpp_headers/dma.o 00:02:55.204 LINK iscsi_fuzz 00:02:55.204 CXX test/cpp_headers/endian.o 00:02:55.465 LINK spdk_top 00:02:55.465 LINK spdk_dd 00:02:55.465 LINK accel_perf 00:02:55.465 CC test/nvme/aer/aer.o 00:02:55.465 CXX test/cpp_headers/env_dpdk.o 00:02:55.465 LINK memory_ut 00:02:55.465 CC test/nvme/reset/reset.o 00:02:55.465 CC test/nvme/sgl/sgl.o 00:02:55.725 CC test/bdev/bdevio/bdevio.o 00:02:55.725 CXX test/cpp_headers/env.o 00:02:55.725 CC app/fio/nvme/fio_plugin.o 00:02:55.725 LINK aer 00:02:55.725 CC test/nvme/e2edp/nvme_dp.o 00:02:55.725 CXX test/cpp_headers/event.o 00:02:55.725 CC examples/blob/hello_world/hello_blob.o 00:02:55.725 LINK reset 00:02:55.725 CXX test/cpp_headers/fd_group.o 00:02:55.725 CC examples/blob/cli/blobcli.o 00:02:55.725 LINK sgl 00:02:55.725 LINK nvme_dp 00:02:55.987 LINK hello_blob 00:02:55.987 CC examples/nvme/hello_world/hello_world.o 00:02:55.987 CXX test/cpp_headers/fd.o 00:02:55.987 CC app/fio/bdev/fio_plugin.o 00:02:55.987 LINK bdevio 00:02:55.987 CC test/nvme/overhead/overhead.o 00:02:55.987 CXX test/cpp_headers/file.o 00:02:55.987 LINK hello_world 00:02:55.987 CC examples/bdev/hello_world/hello_bdev.o 00:02:56.285 CC examples/bdev/bdevperf/bdevperf.o 00:02:56.285 LINK spdk_nvme 00:02:56.285 CC examples/nvme/reconnect/reconnect.o 00:02:56.285 CXX test/cpp_headers/fsdev.o 00:02:56.285 LINK blobcli 00:02:56.285 CXX test/cpp_headers/fsdev_module.o 00:02:56.285 CXX test/cpp_headers/ftl.o 00:02:56.285 LINK overhead 00:02:56.285 LINK hello_bdev 00:02:56.285 CXX test/cpp_headers/fuse_dispatcher.o 00:02:56.570 CXX test/cpp_headers/gpt_spec.o 00:02:56.570 CC test/nvme/err_injection/err_injection.o 00:02:56.570 CXX test/cpp_headers/hexlify.o 00:02:56.570 CC test/nvme/startup/startup.o 00:02:56.570 CXX test/cpp_headers/histogram_data.o 00:02:56.570 CC test/nvme/reserve/reserve.o 00:02:56.570 LINK spdk_bdev 00:02:56.570 LINK reconnect 00:02:56.571 CXX test/cpp_headers/idxd.o 00:02:56.571 LINK err_injection 00:02:56.571 CXX test/cpp_headers/idxd_spec.o 00:02:56.571 LINK startup 00:02:56.571 CC test/nvme/simple_copy/simple_copy.o 00:02:56.571 LINK reserve 00:02:56.571 CC test/nvme/connect_stress/connect_stress.o 00:02:56.571 CXX test/cpp_headers/init.o 00:02:56.571 CXX test/cpp_headers/ioat.o 00:02:56.571 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:56.571 CXX test/cpp_headers/ioat_spec.o 00:02:56.832 
CXX test/cpp_headers/iscsi_spec.o 00:02:56.832 CXX test/cpp_headers/json.o 00:02:56.832 LINK connect_stress 00:02:56.832 LINK bdevperf 00:02:56.832 LINK simple_copy 00:02:56.832 CC examples/nvme/arbitration/arbitration.o 00:02:56.832 CXX test/cpp_headers/jsonrpc.o 00:02:56.832 CXX test/cpp_headers/keyring.o 00:02:56.832 CXX test/cpp_headers/keyring_module.o 00:02:57.094 CC test/nvme/boot_partition/boot_partition.o 00:02:57.094 CC examples/nvme/hotplug/hotplug.o 00:02:57.094 CXX test/cpp_headers/likely.o 00:02:57.094 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:57.094 CC test/nvme/compliance/nvme_compliance.o 00:02:57.094 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:57.094 CC examples/nvme/abort/abort.o 00:02:57.094 LINK boot_partition 00:02:57.094 LINK nvme_manage 00:02:57.094 LINK arbitration 00:02:57.094 CXX test/cpp_headers/log.o 00:02:57.094 LINK cmb_copy 00:02:57.094 LINK hotplug 00:02:57.094 LINK pmr_persistence 00:02:57.355 CC test/nvme/fused_ordering/fused_ordering.o 00:02:57.355 CXX test/cpp_headers/lvol.o 00:02:57.355 CC test/nvme/fdp/fdp.o 00:02:57.355 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:57.355 CXX test/cpp_headers/md5.o 00:02:57.355 CXX test/cpp_headers/memory.o 00:02:57.355 CC test/nvme/cuse/cuse.o 00:02:57.355 LINK nvme_compliance 00:02:57.355 CXX test/cpp_headers/mmio.o 00:02:57.355 LINK fused_ordering 00:02:57.355 LINK abort 00:02:57.355 LINK doorbell_aers 00:02:57.355 CXX test/cpp_headers/nbd.o 00:02:57.355 CXX test/cpp_headers/net.o 00:02:57.355 CXX test/cpp_headers/notify.o 00:02:57.617 CXX test/cpp_headers/nvme.o 00:02:57.617 CXX test/cpp_headers/nvme_intel.o 00:02:57.617 CXX test/cpp_headers/nvme_ocssd.o 00:02:57.617 LINK fdp 00:02:57.617 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:57.617 CXX test/cpp_headers/nvme_spec.o 00:02:57.617 CXX test/cpp_headers/nvme_zns.o 00:02:57.617 CXX test/cpp_headers/nvmf_cmd.o 00:02:57.617 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:57.617 CXX test/cpp_headers/nvmf.o 00:02:57.617 CXX test/cpp_headers/nvmf_spec.o 00:02:57.617 CXX test/cpp_headers/nvmf_transport.o 00:02:57.617 CC examples/nvmf/nvmf/nvmf.o 00:02:57.617 CXX test/cpp_headers/opal.o 00:02:57.617 CXX test/cpp_headers/opal_spec.o 00:02:57.879 CXX test/cpp_headers/pci_ids.o 00:02:57.879 CXX test/cpp_headers/pipe.o 00:02:57.879 CXX test/cpp_headers/queue.o 00:02:57.879 CXX test/cpp_headers/reduce.o 00:02:57.879 CXX test/cpp_headers/rpc.o 00:02:57.879 CXX test/cpp_headers/scheduler.o 00:02:57.879 CXX test/cpp_headers/scsi.o 00:02:57.879 CXX test/cpp_headers/scsi_spec.o 00:02:57.879 CXX test/cpp_headers/sock.o 00:02:57.879 CXX test/cpp_headers/stdinc.o 00:02:57.879 CXX test/cpp_headers/string.o 00:02:57.879 CXX test/cpp_headers/thread.o 00:02:57.879 LINK nvmf 00:02:58.140 CXX test/cpp_headers/trace.o 00:02:58.140 CXX test/cpp_headers/trace_parser.o 00:02:58.140 CXX test/cpp_headers/tree.o 00:02:58.140 CXX test/cpp_headers/ublk.o 00:02:58.140 CXX test/cpp_headers/util.o 00:02:58.140 CXX test/cpp_headers/uuid.o 00:02:58.140 CXX test/cpp_headers/version.o 00:02:58.140 CXX test/cpp_headers/vfio_user_pci.o 00:02:58.140 CXX test/cpp_headers/vfio_user_spec.o 00:02:58.140 CXX test/cpp_headers/vhost.o 00:02:58.140 CXX test/cpp_headers/vmd.o 00:02:58.140 CXX test/cpp_headers/xor.o 00:02:58.140 CXX test/cpp_headers/zipf.o 00:02:58.401 LINK cuse 00:03:00.317 LINK esnap 00:03:00.317 00:03:00.317 real 1m9.309s 00:03:00.317 user 6m31.296s 00:03:00.317 sys 1m8.291s 00:03:00.317 02:50:54 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:00.317 
************************************ 00:03:00.317 END TEST make 00:03:00.317 ************************************ 00:03:00.318 02:50:54 make -- common/autotest_common.sh@10 -- $ set +x 00:03:00.318 02:50:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:00.318 02:50:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:00.318 02:50:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:00.318 02:50:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.318 02:50:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:00.318 02:50:54 -- pm/common@44 -- $ pid=5057 00:03:00.318 02:50:54 -- pm/common@50 -- $ kill -TERM 5057 00:03:00.318 02:50:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.318 02:50:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:00.318 02:50:54 -- pm/common@44 -- $ pid=5058 00:03:00.318 02:50:54 -- pm/common@50 -- $ kill -TERM 5058 00:03:00.318 02:50:54 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:00.318 02:50:54 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:00.580 02:50:54 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:00.580 02:50:54 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:00.580 02:50:54 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:00.580 02:50:54 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:00.580 02:50:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:00.580 02:50:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:00.580 02:50:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:00.580 02:50:54 -- scripts/common.sh@336 -- # IFS=.-: 00:03:00.580 02:50:54 -- scripts/common.sh@336 -- # read -ra ver1 00:03:00.580 02:50:54 -- scripts/common.sh@337 -- # IFS=.-: 00:03:00.580 02:50:54 -- scripts/common.sh@337 -- # read -ra ver2 00:03:00.580 02:50:54 -- scripts/common.sh@338 -- # local 'op=<' 00:03:00.580 02:50:54 -- scripts/common.sh@340 -- # ver1_l=2 00:03:00.580 02:50:54 -- scripts/common.sh@341 -- # ver2_l=1 00:03:00.580 02:50:54 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:00.580 02:50:54 -- scripts/common.sh@344 -- # case "$op" in 00:03:00.580 02:50:54 -- scripts/common.sh@345 -- # : 1 00:03:00.580 02:50:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:00.580 02:50:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:00.580 02:50:54 -- scripts/common.sh@365 -- # decimal 1 00:03:00.580 02:50:54 -- scripts/common.sh@353 -- # local d=1 00:03:00.580 02:50:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:00.580 02:50:54 -- scripts/common.sh@355 -- # echo 1 00:03:00.580 02:50:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:00.580 02:50:54 -- scripts/common.sh@366 -- # decimal 2 00:03:00.580 02:50:54 -- scripts/common.sh@353 -- # local d=2 00:03:00.580 02:50:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:00.580 02:50:54 -- scripts/common.sh@355 -- # echo 2 00:03:00.580 02:50:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:00.580 02:50:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:00.580 02:50:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:00.580 02:50:54 -- scripts/common.sh@368 -- # return 0 00:03:00.580 02:50:54 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:00.580 02:50:54 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.580 --rc genhtml_branch_coverage=1 00:03:00.580 --rc genhtml_function_coverage=1 00:03:00.580 --rc genhtml_legend=1 00:03:00.580 --rc geninfo_all_blocks=1 00:03:00.580 --rc geninfo_unexecuted_blocks=1 00:03:00.580 00:03:00.580 ' 00:03:00.580 02:50:54 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.580 --rc genhtml_branch_coverage=1 00:03:00.580 --rc genhtml_function_coverage=1 00:03:00.580 --rc genhtml_legend=1 00:03:00.580 --rc geninfo_all_blocks=1 00:03:00.580 --rc geninfo_unexecuted_blocks=1 00:03:00.580 00:03:00.580 ' 00:03:00.580 02:50:54 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.580 --rc genhtml_branch_coverage=1 00:03:00.580 --rc genhtml_function_coverage=1 00:03:00.580 --rc genhtml_legend=1 00:03:00.580 --rc geninfo_all_blocks=1 00:03:00.580 --rc geninfo_unexecuted_blocks=1 00:03:00.580 00:03:00.580 ' 00:03:00.580 02:50:54 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.580 --rc genhtml_branch_coverage=1 00:03:00.580 --rc genhtml_function_coverage=1 00:03:00.580 --rc genhtml_legend=1 00:03:00.580 --rc geninfo_all_blocks=1 00:03:00.580 --rc geninfo_unexecuted_blocks=1 00:03:00.580 00:03:00.580 ' 00:03:00.580 02:50:54 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:00.580 02:50:54 -- nvmf/common.sh@7 -- # uname -s 00:03:00.580 02:50:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:00.580 02:50:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:00.580 02:50:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:00.580 02:50:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:00.580 02:50:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:00.580 02:50:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:00.580 02:50:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:00.580 02:50:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:00.580 02:50:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:00.580 02:50:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:00.580 02:50:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0786ddb0-789e-43ba-ae3b-8cda27c29efa 00:03:00.580 
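A minimal bash sketch of the lcov version gate traced above: scripts/common.sh splits both version strings on ".", "-" and ":" and compares them field by field, and since "lt 1.15 2" succeeds the 1.x-style "--rc lcov_*" coverage options are exported. Simplified from the xtrace; the real helper also validates every field through its decimal() routine:

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        # Compare field by field; a missing field counts as 0 ("2" ~ "2.0").
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            if ((d1 > d2)); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if ((d1 < d2)); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == *'='* ]]   # equal versions only satisfy ==, <= and >=
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov is 1.x"   # matches the trace: 1 < 2, so lt returns 0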
02:50:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=0786ddb0-789e-43ba-ae3b-8cda27c29efa 00:03:00.580 02:50:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:00.580 02:50:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:00.580 02:50:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:00.580 02:50:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:00.580 02:50:54 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:00.580 02:50:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:00.580 02:50:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:00.580 02:50:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:00.580 02:50:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:00.580 02:50:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.580 02:50:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.580 02:50:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.580 02:50:54 -- paths/export.sh@5 -- # export PATH 00:03:00.580 02:50:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.580 02:50:54 -- nvmf/common.sh@51 -- # : 0 00:03:00.580 02:50:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:00.580 02:50:54 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:00.580 02:50:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:00.580 02:50:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:00.580 02:50:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:00.580 02:50:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:00.580 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:00.580 02:50:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:00.580 02:50:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:00.580 02:50:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:00.580 02:50:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:00.580 02:50:54 -- spdk/autotest.sh@32 -- # uname -s 00:03:00.580 02:50:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:00.580 02:50:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:00.580 02:50:54 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:00.580 02:50:54 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:00.580 02:50:54 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:00.580 02:50:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:00.580 02:50:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:00.580 02:50:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:00.580 02:50:54 -- spdk/autotest.sh@48 -- # udevadm_pid=54249 00:03:00.580 02:50:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:00.580 02:50:54 -- pm/common@17 -- # local monitor 00:03:00.580 02:50:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.580 02:50:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.580 02:50:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:00.580 02:50:54 -- pm/common@25 -- # sleep 1 00:03:00.580 02:50:54 -- pm/common@21 -- # date +%s 00:03:00.580 02:50:54 -- pm/common@21 -- # date +%s 00:03:00.580 02:50:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733799054 00:03:00.580 02:50:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733799054 00:03:00.581 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733799054_collect-cpu-load.pm.log 00:03:00.581 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733799054_collect-vmstat.pm.log 00:03:01.541 02:50:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:01.541 02:50:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:01.541 02:50:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:01.541 02:50:55 -- common/autotest_common.sh@10 -- # set +x 00:03:01.541 02:50:55 -- spdk/autotest.sh@59 -- # create_test_list 00:03:01.541 02:50:55 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:01.541 02:50:55 -- common/autotest_common.sh@10 -- # set +x 00:03:01.816 02:50:55 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:01.816 02:50:55 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:01.816 02:50:55 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:01.816 02:50:55 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:01.816 02:50:55 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:01.816 02:50:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:01.816 02:50:55 -- common/autotest_common.sh@1457 -- # uname 00:03:01.816 02:50:55 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:01.816 02:50:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:01.816 02:50:55 -- common/autotest_common.sh@1477 -- # uname 00:03:01.816 02:50:55 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:01.816 02:50:55 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:01.816 02:50:55 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:01.816 lcov: LCOV version 1.15 00:03:01.816 02:50:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:16.785 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:16.785 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:31.685 02:51:24 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:31.685 02:51:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:31.685 02:51:24 -- common/autotest_common.sh@10 -- # set +x 00:03:31.685 02:51:24 -- spdk/autotest.sh@78 -- # rm -f 00:03:31.685 02:51:24 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:31.685 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:31.685 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:31.685 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:31.685 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:31.685 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:31.685 02:51:24 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:31.685 02:51:24 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:31.685 02:51:24 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:31.685 02:51:24 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:31.685 02:51:24 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:31.685 02:51:24 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:31.685 02:51:24 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:31.685 02:51:24 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:31.685 02:51:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:31.685 02:51:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:31.685 02:51:24 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:31.685 02:51:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:31.685 02:51:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:31.685 02:51:24 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:31.685 02:51:24 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:31.685 02:51:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:31.685 02:51:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:31.685 02:51:24 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:31.685 02:51:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:31.685 02:51:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:31.685 02:51:24 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:31.685 02:51:24 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:03:31.685 02:51:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:31.685 02:51:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:03:31.685 02:51:24 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:31.685 02:51:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:31.685 02:51:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:31.685 02:51:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:31.685 02:51:24 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:03:31.685 02:51:24 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:03:31.685 02:51:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:31.685 02:51:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:31.685 02:51:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:31.685 02:51:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:03:31.685 02:51:24 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:03:31.685 02:51:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:31.685 02:51:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:31.685 02:51:24 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:31.685 02:51:24 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:03:31.685 02:51:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:31.685 02:51:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:03:31.685 02:51:24 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:03:31.685 02:51:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:31.685 02:51:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:31.685 02:51:24 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:31.685 02:51:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:31.685 02:51:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:31.686 02:51:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:31.686 02:51:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:31.686 02:51:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:31.686 No valid GPT data, bailing 00:03:31.686 02:51:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:31.686 02:51:24 -- scripts/common.sh@394 -- # pt= 00:03:31.686 02:51:24 -- scripts/common.sh@395 -- # return 1 00:03:31.686 02:51:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:31.686 1+0 records in 00:03:31.686 1+0 records out 00:03:31.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00994561 s, 105 MB/s 00:03:31.686 02:51:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:31.686 02:51:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:31.686 02:51:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:31.686 02:51:24 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:31.686 02:51:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:31.686 No valid GPT data, bailing 00:03:31.686 02:51:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:31.686 02:51:25 -- scripts/common.sh@394 -- # pt= 00:03:31.686 02:51:25 -- scripts/common.sh@395 -- # return 1 00:03:31.686 02:51:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:31.686 1+0 records in 00:03:31.686 1+0 records out 00:03:31.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436779 s, 240 MB/s 00:03:31.686 02:51:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:31.686 02:51:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:31.686 02:51:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:31.686 02:51:25 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:31.686 02:51:25 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:31.686 No valid GPT data, bailing 00:03:31.686 02:51:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:31.686 02:51:25 -- scripts/common.sh@394 -- # pt= 00:03:31.686 02:51:25 -- scripts/common.sh@395 -- # return 1 00:03:31.686 02:51:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:31.686 1+0 records in 00:03:31.686 1+0 records out 00:03:31.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00316531 s, 331 MB/s 00:03:31.686 02:51:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:31.686 02:51:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:31.686 02:51:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:03:31.686 02:51:25 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:03:31.686 02:51:25 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:31.686 No valid GPT data, bailing 00:03:31.686 02:51:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:31.686 02:51:25 -- scripts/common.sh@394 -- # pt= 00:03:31.686 02:51:25 -- scripts/common.sh@395 -- # return 1 00:03:31.686 02:51:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:31.686 1+0 records in 00:03:31.686 1+0 records out 00:03:31.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00317618 s, 330 MB/s 00:03:31.686 02:51:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:31.686 02:51:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:31.686 02:51:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:03:31.686 02:51:25 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:03:31.686 02:51:25 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:31.686 No valid GPT data, bailing 00:03:31.686 02:51:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:31.686 02:51:25 -- scripts/common.sh@394 -- # pt= 00:03:31.686 02:51:25 -- scripts/common.sh@395 -- # return 1 00:03:31.686 02:51:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:31.686 1+0 records in 00:03:31.686 1+0 records out 00:03:31.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0046515 s, 225 MB/s 00:03:31.686 02:51:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:31.686 02:51:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:31.686 02:51:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:31.686 02:51:25 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:31.686 02:51:25 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:31.686 No valid GPT data, bailing 00:03:31.686 02:51:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:31.686 02:51:25 -- scripts/common.sh@394 -- # pt= 00:03:31.686 02:51:25 -- scripts/common.sh@395 -- # return 1 00:03:31.686 02:51:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:31.686 1+0 records in 00:03:31.686 1+0 records out 00:03:31.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00319916 s, 328 MB/s 00:03:31.686 02:51:25 -- spdk/autotest.sh@105 -- # sync 00:03:31.686 02:51:25 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:31.686 02:51:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:31.686 02:51:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:32.632 
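The scrub pass traced above boils down to two checks per NVMe namespace: skip anything whose sysfs queue/zoned attribute is not "none", then zero the first MiB of every remaining device on which no partition table is found ("No valid GPT data, bailing"). A condensed sketch, assuming the same device naming as this run; the real autotest.sh additionally consults scripts/spdk-gpt.py for SPDK-owned GPT labels:

    shopt -s extglob

    is_block_zoned() {
        # Zoned when queue/zoned exists and reads anything but "none".
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    for dev in /dev/nvme*n!(*p*); do              # namespaces, not partitions
        is_block_zoned "$(basename "$dev")" && continue   # all "none" here
        pt=$(blkid -s PTTYPE -o value "$dev")     # empty: no partition table
        if [[ -z $pt ]]; then
            # Wipe stale metadata so earlier runs cannot leak into the tests.
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done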
02:51:26 -- spdk/autotest.sh@111 -- # uname -s 00:03:32.632 02:51:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:32.632 02:51:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:32.632 02:51:26 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:32.890 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:33.455 Hugepages 00:03:33.455 node hugesize free / total 00:03:33.455 node0 1048576kB 0 / 0 00:03:33.455 node0 2048kB 0 / 0 00:03:33.455 00:03:33.455 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:33.455 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:33.455 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:33.455 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:33.717 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:33.717 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:33.717 02:51:27 -- spdk/autotest.sh@117 -- # uname -s 00:03:33.717 02:51:27 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:33.717 02:51:27 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:33.717 02:51:27 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:33.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:34.556 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:34.556 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:34.556 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:34.556 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:34.556 02:51:28 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:35.930 02:51:29 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:35.930 02:51:29 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:35.930 02:51:29 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:35.930 02:51:29 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:35.930 02:51:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:35.930 02:51:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:35.930 02:51:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:35.930 02:51:29 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:35.930 02:51:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:35.930 02:51:29 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:35.930 02:51:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:35.930 02:51:29 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:35.930 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.188 Waiting for block devices as requested 00:03:36.188 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:36.188 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:36.188 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:36.445 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:03:41.704 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:03:41.704 02:51:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:41.704 02:51:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
00:03:41.704 02:51:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:41.704 02:51:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:41.704 02:51:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:41.704 02:51:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:41.704 02:51:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:41.704 02:51:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:41.704 02:51:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:41.704 02:51:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:41.704 02:51:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:41.704 02:51:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:41.704 02:51:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:41.704 02:51:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:41.704 02:51:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1543 -- # continue 00:03:41.704 02:51:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:41.704 02:51:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:41.704 02:51:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:41.704 02:51:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:41.704 02:51:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:41.704 02:51:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:41.704 02:51:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:41.704 02:51:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:41.704 02:51:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:41.704 02:51:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:41.704 02:51:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:41.704 02:51:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:41.704 02:51:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:41.704 02:51:35 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:03:41.704 02:51:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1543 -- # continue 00:03:41.704 02:51:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:41.704 02:51:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:03:41.704 02:51:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:41.704 02:51:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:03:41.704 02:51:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:41.704 02:51:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:41.704 02:51:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:03:41.704 02:51:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:03:41.704 02:51:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:41.704 02:51:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:41.704 02:51:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:03:41.704 02:51:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:41.704 02:51:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:41.704 02:51:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:41.704 02:51:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1543 -- # continue 00:03:41.704 02:51:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:41.704 02:51:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:03:41.704 02:51:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:41.704 02:51:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:03:41.704 02:51:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:41.704 02:51:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:41.704 02:51:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:03:41.704 02:51:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:03:41.704 02:51:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:41.704 02:51:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:41.705 02:51:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:41.705 02:51:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:41.705 02:51:35 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:41.705 02:51:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:41.705 02:51:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:03:41.705 02:51:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:41.705 02:51:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:41.705 02:51:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:41.705 02:51:35 -- common/autotest_common.sh@1543 -- # continue 00:03:41.705 02:51:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:41.705 02:51:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:41.705 02:51:35 -- common/autotest_common.sh@10 -- # set +x 00:03:41.705 02:51:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:41.705 02:51:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:41.705 02:51:35 -- common/autotest_common.sh@10 -- # set +x 00:03:41.705 02:51:35 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:41.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:42.527 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.527 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.527 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.527 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.527 02:51:36 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:42.527 02:51:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:42.527 02:51:36 -- common/autotest_common.sh@10 -- # set +x 00:03:42.527 02:51:36 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:42.527 02:51:36 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:42.527 02:51:36 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:42.527 02:51:36 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:42.527 02:51:36 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:42.527 02:51:36 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:42.527 02:51:36 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:42.527 02:51:36 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:42.527 02:51:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:42.527 02:51:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:42.527 02:51:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:42.527 02:51:36 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:42.527 02:51:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:42.785 02:51:36 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:42.785 02:51:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:42.785 02:51:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:42.785 02:51:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:42.785 02:51:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:42.785 02:51:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:42.785 02:51:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:42.785 02:51:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:42.785 02:51:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:42.785 
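Each pass of the pre_cleanup loop traced above maps a PCI address back to its controller node, then reads two Identify Controller fields: OACS, whose bit 3 (mask 0x8) advertises Namespace Management, and unvmcap, which must be 0 before testing starts. Every emulated controller here reports oacs=0x12a (bit 3 set) and unvmcap=0, so each iteration ends in continue. A hedged sketch of that gate, simplified from autotest_common.sh:

    get_nvme_ctrlr_from_bdf() {
        # Follow /sys/class/nvme/* back to the BDF, e.g.
        # /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 -> /dev/nvme1
        local bdf=$1 path
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
        [[ -n $path ]] && printf '/dev/%s\n' "$(basename "$path")"
    }

    ctrlr=$(get_nvme_ctrlr_from_bdf 0000:00:10.0)                 # /dev/nvme1 here
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)       # ' 0x12a'
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2) # ' 0'
    # 0x12a & 0x8 == 8 (namespace management supported), no unallocated NVM:
    (( oacs & 0x8 )) && (( unvmcap == 0 )) && echo "$ctrlr is clean"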
02:51:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:42.785 02:51:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:42.786 02:51:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:03:42.786 02:51:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:42.786 02:51:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:42.786 02:51:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:42.786 02:51:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:03:42.786 02:51:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:42.786 02:51:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:42.786 02:51:36 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:42.786 02:51:36 -- common/autotest_common.sh@1572 -- # return 0 00:03:42.786 02:51:36 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:42.786 02:51:36 -- common/autotest_common.sh@1580 -- # return 0 00:03:42.786 02:51:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:42.786 02:51:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:42.786 02:51:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:42.786 02:51:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:42.786 02:51:36 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:42.786 02:51:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.786 02:51:36 -- common/autotest_common.sh@10 -- # set +x 00:03:42.786 02:51:36 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:42.786 02:51:36 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:42.786 02:51:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.786 02:51:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.786 02:51:36 -- common/autotest_common.sh@10 -- # set +x 00:03:42.786 ************************************ 00:03:42.786 START TEST env 00:03:42.786 ************************************ 00:03:42.786 02:51:36 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:42.786 * Looking for test storage... 
00:03:42.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:42.786 02:51:37 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:42.786 02:51:37 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:42.786 02:51:37 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:42.786 02:51:37 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:42.786 02:51:37 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.786 02:51:37 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.786 02:51:37 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.786 02:51:37 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.786 02:51:37 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.786 02:51:37 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.786 02:51:37 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.786 02:51:37 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.786 02:51:37 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.786 02:51:37 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.786 02:51:37 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.786 02:51:37 env -- scripts/common.sh@344 -- # case "$op" in 00:03:42.786 02:51:37 env -- scripts/common.sh@345 -- # : 1 00:03:42.786 02:51:37 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.786 02:51:37 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:42.786 02:51:37 env -- scripts/common.sh@365 -- # decimal 1 00:03:42.786 02:51:37 env -- scripts/common.sh@353 -- # local d=1 00:03:42.786 02:51:37 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.786 02:51:37 env -- scripts/common.sh@355 -- # echo 1 00:03:42.786 02:51:37 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.786 02:51:37 env -- scripts/common.sh@366 -- # decimal 2 00:03:42.786 02:51:37 env -- scripts/common.sh@353 -- # local d=2 00:03:42.786 02:51:37 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.786 02:51:37 env -- scripts/common.sh@355 -- # echo 2 00:03:42.786 02:51:37 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.786 02:51:37 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.786 02:51:37 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.786 02:51:37 env -- scripts/common.sh@368 -- # return 0 00:03:42.786 02:51:37 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.786 02:51:37 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:42.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.786 --rc genhtml_branch_coverage=1 00:03:42.786 --rc genhtml_function_coverage=1 00:03:42.786 --rc genhtml_legend=1 00:03:42.786 --rc geninfo_all_blocks=1 00:03:42.786 --rc geninfo_unexecuted_blocks=1 00:03:42.786 00:03:42.786 ' 00:03:42.786 02:51:37 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:42.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.786 --rc genhtml_branch_coverage=1 00:03:42.786 --rc genhtml_function_coverage=1 00:03:42.786 --rc genhtml_legend=1 00:03:42.786 --rc geninfo_all_blocks=1 00:03:42.786 --rc geninfo_unexecuted_blocks=1 00:03:42.786 00:03:42.786 ' 00:03:42.786 02:51:37 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:42.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.786 --rc genhtml_branch_coverage=1 00:03:42.786 --rc genhtml_function_coverage=1 00:03:42.786 --rc 
genhtml_legend=1 00:03:42.786 --rc geninfo_all_blocks=1 00:03:42.786 --rc geninfo_unexecuted_blocks=1 00:03:42.786 00:03:42.786 ' 00:03:42.786 02:51:37 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:42.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.786 --rc genhtml_branch_coverage=1 00:03:42.786 --rc genhtml_function_coverage=1 00:03:42.786 --rc genhtml_legend=1 00:03:42.786 --rc geninfo_all_blocks=1 00:03:42.786 --rc geninfo_unexecuted_blocks=1 00:03:42.786 00:03:42.786 ' 00:03:42.786 02:51:37 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:42.786 02:51:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.786 02:51:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.786 02:51:37 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.786 ************************************ 00:03:42.786 START TEST env_memory 00:03:42.786 ************************************ 00:03:42.786 02:51:37 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:42.786 00:03:42.786 00:03:42.786 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.786 http://cunit.sourceforge.net/ 00:03:42.786 00:03:42.786 00:03:42.786 Suite: memory 00:03:42.786 Test: alloc and free memory map ...[2024-12-10 02:51:37.144807] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:43.046 passed 00:03:43.046 Test: mem map translation ...[2024-12-10 02:51:37.184390] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:43.046 [2024-12-10 02:51:37.184464] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:43.046 [2024-12-10 02:51:37.184528] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:43.046 [2024-12-10 02:51:37.184543] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:43.046 passed 00:03:43.046 Test: mem map registration ...[2024-12-10 02:51:37.252731] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:43.046 [2024-12-10 02:51:37.252794] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:43.046 passed 00:03:43.046 Test: mem map adjacent registrations ...passed 00:03:43.046 00:03:43.046 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.046 suites 1 1 n/a 0 0 00:03:43.046 tests 4 4 4 0 0 00:03:43.046 asserts 152 152 152 0 n/a 00:03:43.046 00:03:43.046 Elapsed time = 0.235 seconds 00:03:43.046 00:03:43.046 real 0m0.265s 00:03:43.046 user 0m0.242s 00:03:43.046 sys 0m0.017s 00:03:43.046 02:51:37 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:43.046 ************************************ 00:03:43.046 END TEST env_memory 00:03:43.046 02:51:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:43.046 ************************************ 00:03:43.046 02:51:37 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:43.046 02:51:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.046 02:51:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.046 02:51:37 env -- common/autotest_common.sh@10 -- # set +x 00:03:43.046 ************************************ 00:03:43.046 START TEST env_vtophys 00:03:43.047 ************************************ 00:03:43.047 02:51:37 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:43.305 EAL: lib.eal log level changed from notice to debug 00:03:43.305 EAL: Detected lcore 0 as core 0 on socket 0 00:03:43.305 EAL: Detected lcore 1 as core 0 on socket 0 00:03:43.305 EAL: Detected lcore 2 as core 0 on socket 0 00:03:43.305 EAL: Detected lcore 3 as core 0 on socket 0 00:03:43.305 EAL: Detected lcore 4 as core 0 on socket 0 00:03:43.305 EAL: Detected lcore 5 as core 0 on socket 0 00:03:43.305 EAL: Detected lcore 6 as core 0 on socket 0 00:03:43.305 EAL: Detected lcore 7 as core 0 on socket 0 00:03:43.305 EAL: Detected lcore 8 as core 0 on socket 0 00:03:43.305 EAL: Detected lcore 9 as core 0 on socket 0 00:03:43.305 EAL: Maximum logical cores by configuration: 128 00:03:43.305 EAL: Detected CPU lcores: 10 00:03:43.305 EAL: Detected NUMA nodes: 1 00:03:43.305 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:43.305 EAL: Detected shared linkage of DPDK 00:03:43.305 EAL: No shared files mode enabled, IPC will be disabled 00:03:43.305 EAL: Selected IOVA mode 'PA' 00:03:43.305 EAL: Probing VFIO support... 00:03:43.305 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:43.305 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:43.305 EAL: Ask a virtual area of 0x2e000 bytes 00:03:43.305 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:43.305 EAL: Setting up physically contiguous memory... 
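The EAL bring-up traced above (and the memory setup that continues below) is what the vtophys test binary triggers when it initializes the SPDK environment. A minimal sketch of that initialization, assuming the standard spdk/env.h API; the process name and core mask are illustrative:

#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    /* Defaults first; spdk_env_init() is the call that produces the EAL
     * lcore detection, IOVA mode selection and VFIO probing logged above. */
    opts.opts_size = sizeof(opts);
    spdk_env_opts_init(&opts);
    opts.name = "vtophys";
    opts.core_mask = "0x1";

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "spdk_env_init() failed\n");
        return 1;
    }

    /* ... run the CUnit suites here, then tear down ... */
    spdk_env_fini();
    return 0;
}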
00:03:43.305 EAL: Setting maximum number of open files to 524288 00:03:43.305 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:43.305 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:43.305 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.305 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:43.305 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.305 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.305 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:43.305 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:43.305 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.305 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:43.305 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.305 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.305 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:43.305 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:43.305 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.305 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:43.305 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.305 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.305 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:43.305 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:43.305 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.305 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:43.305 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.305 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.305 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:43.305 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:43.305 EAL: Hugepages will be freed exactly as allocated. 00:03:43.305 EAL: No shared files mode enabled, IPC is disabled 00:03:43.305 EAL: No shared files mode enabled, IPC is disabled 00:03:43.305 EAL: TSC frequency is ~2600000 KHz 00:03:43.305 EAL: Main lcore 0 is ready (tid=7fe7c2ef1a40;cpuset=[0]) 00:03:43.305 EAL: Trying to obtain current memory policy. 00:03:43.305 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.305 EAL: Restoring previous memory policy: 0 00:03:43.305 EAL: request: mp_malloc_sync 00:03:43.305 EAL: No shared files mode enabled, IPC is disabled 00:03:43.305 EAL: Heap on socket 0 was expanded by 2MB 00:03:43.305 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:43.305 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:43.305 EAL: Mem event callback 'spdk:(nil)' registered 00:03:43.305 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:43.305 00:03:43.305 00:03:43.305 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.305 http://cunit.sourceforge.net/ 00:03:43.305 00:03:43.305 00:03:43.305 Suite: components_suite 00:03:43.564 Test: vtophys_malloc_test ...passed 00:03:43.822 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
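Each "expanded by" / "shrunk by" pair in the cycles that follow corresponds to one pinned allocation and its release. A sketch of the pattern being exercised, assuming the spdk/env.h DMA helpers; the size and alignment values are illustrative:

#include "spdk/env.h"

/* Allocate pinned, DMA-safe memory and resolve its physical address.
 * Allocations like this drive the "Heap on socket 0 was expanded by ..."
 * events below; freeing drives the matching "shrunk by" events. */
static int check_vtophys(size_t size)
{
    void *buf;
    uint64_t paddr;

    buf = spdk_dma_malloc(size, 0x1000 /* 4 KiB alignment */, NULL);
    if (buf == NULL) {
        return -1;
    }

    paddr = spdk_vtophys(buf, NULL);
    if (paddr == SPDK_VTOPHYS_ERROR) {
        spdk_dma_free(buf);
        return -1;
    }

    spdk_dma_free(buf);
    return 0;
}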
00:03:43.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.822 EAL: Restoring previous memory policy: 4 00:03:43.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.822 EAL: request: mp_malloc_sync 00:03:43.822 EAL: No shared files mode enabled, IPC is disabled 00:03:43.822 EAL: Heap on socket 0 was expanded by 4MB 00:03:43.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.822 EAL: request: mp_malloc_sync 00:03:43.822 EAL: No shared files mode enabled, IPC is disabled 00:03:43.822 EAL: Heap on socket 0 was shrunk by 4MB 00:03:43.822 EAL: Trying to obtain current memory policy. 00:03:43.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.822 EAL: Restoring previous memory policy: 4 00:03:43.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.822 EAL: request: mp_malloc_sync 00:03:43.822 EAL: No shared files mode enabled, IPC is disabled 00:03:43.822 EAL: Heap on socket 0 was expanded by 6MB 00:03:43.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.822 EAL: request: mp_malloc_sync 00:03:43.822 EAL: No shared files mode enabled, IPC is disabled 00:03:43.822 EAL: Heap on socket 0 was shrunk by 6MB 00:03:43.822 EAL: Trying to obtain current memory policy. 00:03:43.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.822 EAL: Restoring previous memory policy: 4 00:03:43.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.822 EAL: request: mp_malloc_sync 00:03:43.822 EAL: No shared files mode enabled, IPC is disabled 00:03:43.822 EAL: Heap on socket 0 was expanded by 10MB 00:03:43.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.822 EAL: request: mp_malloc_sync 00:03:43.822 EAL: No shared files mode enabled, IPC is disabled 00:03:43.822 EAL: Heap on socket 0 was shrunk by 10MB 00:03:43.822 EAL: Trying to obtain current memory policy. 00:03:43.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.822 EAL: Restoring previous memory policy: 4 00:03:43.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.822 EAL: request: mp_malloc_sync 00:03:43.822 EAL: No shared files mode enabled, IPC is disabled 00:03:43.822 EAL: Heap on socket 0 was expanded by 18MB 00:03:43.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.822 EAL: request: mp_malloc_sync 00:03:43.822 EAL: No shared files mode enabled, IPC is disabled 00:03:43.822 EAL: Heap on socket 0 was shrunk by 18MB 00:03:43.822 EAL: Trying to obtain current memory policy. 00:03:43.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.822 EAL: Restoring previous memory policy: 4 00:03:43.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.822 EAL: request: mp_malloc_sync 00:03:43.822 EAL: No shared files mode enabled, IPC is disabled 00:03:43.822 EAL: Heap on socket 0 was expanded by 34MB 00:03:43.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.822 EAL: request: mp_malloc_sync 00:03:43.822 EAL: No shared files mode enabled, IPC is disabled 00:03:43.822 EAL: Heap on socket 0 was shrunk by 34MB 00:03:43.822 EAL: Trying to obtain current memory policy. 
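The "Setting policy MPOL_PREFERRED" / "Restoring previous memory policy" bracketing around each expansion reflects EAL steering the hugepage allocation to one NUMA socket. Roughly the following libnuma-level pattern; this is a sketch of the idea, not DPDK's actual code:

#include <numaif.h>

/* Remember the current NUMA policy, prefer the target socket while the
 * pages are faulted in, then put the old policy back. */
static void with_preferred_node(int node)
{
    int old_mode = 0;
    unsigned long nodemask = 1UL << node;

    get_mempolicy(&old_mode, NULL, 0, NULL, 0);    /* "Trying to obtain..." */
    set_mempolicy(MPOL_PREFERRED, &nodemask, sizeof(nodemask) * 8);

    /* ... allocate and touch hugepages on this socket ... */

    set_mempolicy(old_mode, NULL, 0);              /* "Restoring previous..." */
}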
00:03:43.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.822 EAL: Restoring previous memory policy: 4 00:03:43.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.822 EAL: request: mp_malloc_sync 00:03:43.822 EAL: No shared files mode enabled, IPC is disabled 00:03:43.822 EAL: Heap on socket 0 was expanded by 66MB 00:03:43.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.822 EAL: request: mp_malloc_sync 00:03:43.822 EAL: No shared files mode enabled, IPC is disabled 00:03:43.822 EAL: Heap on socket 0 was shrunk by 66MB 00:03:44.080 EAL: Trying to obtain current memory policy. 00:03:44.080 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.080 EAL: Restoring previous memory policy: 4 00:03:44.080 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.080 EAL: request: mp_malloc_sync 00:03:44.080 EAL: No shared files mode enabled, IPC is disabled 00:03:44.080 EAL: Heap on socket 0 was expanded by 130MB 00:03:44.080 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.080 EAL: request: mp_malloc_sync 00:03:44.080 EAL: No shared files mode enabled, IPC is disabled 00:03:44.081 EAL: Heap on socket 0 was shrunk by 130MB 00:03:44.338 EAL: Trying to obtain current memory policy. 00:03:44.339 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.339 EAL: Restoring previous memory policy: 4 00:03:44.339 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.339 EAL: request: mp_malloc_sync 00:03:44.339 EAL: No shared files mode enabled, IPC is disabled 00:03:44.339 EAL: Heap on socket 0 was expanded by 258MB 00:03:44.596 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.596 EAL: request: mp_malloc_sync 00:03:44.596 EAL: No shared files mode enabled, IPC is disabled 00:03:44.596 EAL: Heap on socket 0 was shrunk by 258MB 00:03:44.855 EAL: Trying to obtain current memory policy. 00:03:44.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.112 EAL: Restoring previous memory policy: 4 00:03:45.112 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.112 EAL: request: mp_malloc_sync 00:03:45.112 EAL: No shared files mode enabled, IPC is disabled 00:03:45.112 EAL: Heap on socket 0 was expanded by 514MB 00:03:45.717 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.717 EAL: request: mp_malloc_sync 00:03:45.717 EAL: No shared files mode enabled, IPC is disabled 00:03:45.717 EAL: Heap on socket 0 was shrunk by 514MB 00:03:46.304 EAL: Trying to obtain current memory policy. 
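The repeated "Calling mem event callback 'spdk:(nil)'" lines are DPDK notifying SPDK each time the heap grows or shrinks, so SPDK can keep its translation maps current. The hook goes through the DPDK memory-event API; a sketch of the registration, not SPDK's exact handler:

#include <rte_memory.h>

/* DPDK invokes this on every heap expansion or shrink; an SPDK-style
 * handler would (un)register the affected range with its memory maps. */
static void
mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
{
    (void)arg;
    if (event == RTE_MEM_EVENT_ALLOC) {
        /* e.g. spdk_mem_register((void *)addr, len); */
        (void)addr; (void)len;
    } else if (event == RTE_MEM_EVENT_FREE) {
        /* e.g. spdk_mem_unregister((void *)addr, len); */
    }
}

static int register_hook(void)
{
    /* The "spdk" name here is what appears as 'spdk:(nil)' in the log. */
    return rte_mem_event_callback_register("spdk", mem_event_cb, NULL);
}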
00:03:46.304 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:46.304 EAL: Restoring previous memory policy: 4 00:03:46.304 EAL: Calling mem event callback 'spdk:(nil)' 00:03:46.304 EAL: request: mp_malloc_sync 00:03:46.304 EAL: No shared files mode enabled, IPC is disabled 00:03:46.304 EAL: Heap on socket 0 was expanded by 1026MB 00:03:47.677 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.677 EAL: request: mp_malloc_sync 00:03:47.677 EAL: No shared files mode enabled, IPC is disabled 00:03:47.677 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:48.673 passed 00:03:48.673 00:03:48.673 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.673 suites 1 1 n/a 0 0 00:03:48.673 tests 2 2 2 0 0 00:03:48.673 asserts 5845 5845 5845 0 n/a 00:03:48.673 00:03:48.673 Elapsed time = 5.349 seconds 00:03:48.673 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.673 EAL: request: mp_malloc_sync 00:03:48.673 EAL: No shared files mode enabled, IPC is disabled 00:03:48.673 EAL: Heap on socket 0 was shrunk by 2MB 00:03:48.673 EAL: No shared files mode enabled, IPC is disabled 00:03:48.673 EAL: No shared files mode enabled, IPC is disabled 00:03:48.673 EAL: No shared files mode enabled, IPC is disabled 00:03:48.673 ************************************ 00:03:48.673 END TEST env_vtophys 00:03:48.673 ************************************ 00:03:48.673 00:03:48.673 real 0m5.629s 00:03:48.673 user 0m4.759s 00:03:48.673 sys 0m0.710s 00:03:48.673 02:51:43 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.673 02:51:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:48.932 02:51:43 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:48.932 02:51:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.932 02:51:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.932 02:51:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.932 ************************************ 00:03:48.932 START TEST env_pci 00:03:48.932 ************************************ 00:03:48.932 02:51:43 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:48.932 00:03:48.932 00:03:48.932 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.932 http://cunit.sourceforge.net/ 00:03:48.932 00:03:48.932 00:03:48.932 Suite: pci 00:03:48.932 Test: pci_hook ...[2024-12-10 02:51:43.124471] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56980 has claimed it 00:03:48.932 EAL: Cannot find device (10000:00:01.0) 00:03:48.932 passed 00:03:48.932 00:03:48.932 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.932 suites 1 1 n/a 0 0 00:03:48.932 tests 1 1 1 0 0 00:03:48.932 asserts 25 25 25 0 n/a 00:03:48.932 00:03:48.932 Elapsed time = 0.006 seconds 00:03:48.932 EAL: Failed to attach device on primary process 00:03:48.932 00:03:48.932 real 0m0.066s 00:03:48.932 user 0m0.026s 00:03:48.932 sys 0m0.039s 00:03:48.932 02:51:43 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.932 ************************************ 00:03:48.932 END TEST env_pci 00:03:48.932 02:51:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:48.932 ************************************ 00:03:48.932 02:51:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:48.932 02:51:43 env -- env/env.sh@15 -- # uname 00:03:48.932 02:51:43 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:48.932 02:51:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:48.932 02:51:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:48.932 02:51:43 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:48.932 02:51:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.932 02:51:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.932 ************************************ 00:03:48.932 START TEST env_dpdk_post_init 00:03:48.932 ************************************ 00:03:48.932 02:51:43 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:48.932 EAL: Detected CPU lcores: 10 00:03:48.932 EAL: Detected NUMA nodes: 1 00:03:48.932 EAL: Detected shared linkage of DPDK 00:03:48.932 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:48.932 EAL: Selected IOVA mode 'PA' 00:03:49.190 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:49.190 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:49.190 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:49.190 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:03:49.190 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:03:49.190 Starting DPDK initialization... 00:03:49.190 Starting SPDK post initialization... 00:03:49.190 SPDK NVMe probe 00:03:49.190 Attaching to 0000:00:10.0 00:03:49.190 Attaching to 0000:00:11.0 00:03:49.190 Attaching to 0000:00:12.0 00:03:49.190 Attaching to 0000:00:13.0 00:03:49.190 Attached to 0000:00:13.0 00:03:49.190 Attached to 0000:00:10.0 00:03:49.190 Attached to 0000:00:11.0 00:03:49.190 Attached to 0000:00:12.0 00:03:49.190 Cleaning up... 
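Unlike the earlier tests, env_dpdk_post_init covers the case where the application brings up DPDK itself and only afterwards attaches SPDK on top, which is what the "Starting DPDK initialization... / Starting SPDK post initialization..." pair above reflects. A sketch, assuming the spdk/env_dpdk.h entry points; the EAL arguments mirror the flags passed to the test:

#include <rte_eal.h>
#include "spdk/env_dpdk.h"

int main(void)
{
    char *eal_argv[] = {
        "env_dpdk_post_init",
        "-c", "0x1",
        "--base-virtaddr=0x200000000000",
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    /* The application owns DPDK initialization... */
    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        return 1;
    }

    /* ...then SPDK layers its memory and PCI bookkeeping on top. */
    if (spdk_env_dpdk_post_init(false) != 0) {
        return 1;
    }

    /* ... probe NVMe controllers, run I/O, then unwind in reverse ... */
    spdk_env_dpdk_post_fini();
    rte_eal_cleanup();
    return 0;
}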
00:03:49.190 00:03:49.190 real 0m0.254s 00:03:49.190 user 0m0.077s 00:03:49.190 sys 0m0.077s 00:03:49.190 02:51:43 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.190 02:51:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:49.190 ************************************ 00:03:49.190 END TEST env_dpdk_post_init 00:03:49.190 ************************************ 00:03:49.190 02:51:43 env -- env/env.sh@26 -- # uname 00:03:49.190 02:51:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:49.190 02:51:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:49.190 02:51:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.191 02:51:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.191 02:51:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.191 ************************************ 00:03:49.191 START TEST env_mem_callbacks 00:03:49.191 ************************************ 00:03:49.191 02:51:43 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:49.449 EAL: Detected CPU lcores: 10 00:03:49.449 EAL: Detected NUMA nodes: 1 00:03:49.449 EAL: Detected shared linkage of DPDK 00:03:49.449 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:49.449 EAL: Selected IOVA mode 'PA' 00:03:49.449 00:03:49.449 00:03:49.449 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.449 http://cunit.sourceforge.net/ 00:03:49.449 00:03:49.449 00:03:49.449 Suite: memory 00:03:49.449 Test: test ... 00:03:49.449 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:49.449 register 0x200000200000 2097152 00:03:49.449 malloc 3145728 00:03:49.449 register 0x200000400000 4194304 00:03:49.449 buf 0x2000004fffc0 len 3145728 PASSED 00:03:49.449 malloc 64 00:03:49.449 buf 0x2000004ffec0 len 64 PASSED 00:03:49.449 malloc 4194304 00:03:49.449 register 0x200000800000 6291456 00:03:49.449 buf 0x2000009fffc0 len 4194304 PASSED 00:03:49.449 free 0x2000004fffc0 3145728 00:03:49.449 free 0x2000004ffec0 64 00:03:49.449 unregister 0x200000400000 4194304 PASSED 00:03:49.449 free 0x2000009fffc0 4194304 00:03:49.449 unregister 0x200000800000 6291456 PASSED 00:03:49.449 malloc 8388608 00:03:49.449 register 0x200000400000 10485760 00:03:49.449 buf 0x2000005fffc0 len 8388608 PASSED 00:03:49.449 free 0x2000005fffc0 8388608 00:03:49.449 unregister 0x200000400000 10485760 PASSED 00:03:49.449 passed 00:03:49.449 00:03:49.449 Run Summary: Type Total Ran Passed Failed Inactive 00:03:49.449 suites 1 1 n/a 0 0 00:03:49.449 tests 1 1 1 0 0 00:03:49.449 asserts 15 15 15 0 n/a 00:03:49.449 00:03:49.449 Elapsed time = 0.047 seconds 00:03:49.449 00:03:49.449 real 0m0.239s 00:03:49.449 user 0m0.075s 00:03:49.449 sys 0m0.060s 00:03:49.449 02:51:43 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.449 02:51:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:49.449 ************************************ 00:03:49.449 END TEST env_mem_callbacks 00:03:49.449 ************************************ 00:03:49.707 00:03:49.707 real 0m6.900s 00:03:49.707 user 0m5.320s 00:03:49.707 sys 0m1.103s 00:03:49.707 02:51:43 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.707 ************************************ 00:03:49.707 END TEST env 00:03:49.707 ************************************ 00:03:49.707 02:51:43 env -- 
common/autotest_common.sh@10 -- # set +x 00:03:49.707 02:51:43 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:49.707 02:51:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.707 02:51:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.707 02:51:43 -- common/autotest_common.sh@10 -- # set +x 00:03:49.707 ************************************ 00:03:49.707 START TEST rpc 00:03:49.707 ************************************ 00:03:49.707 02:51:43 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:49.707 * Looking for test storage... 00:03:49.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:49.707 02:51:43 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:49.707 02:51:43 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:49.707 02:51:43 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:49.707 02:51:44 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:49.707 02:51:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.707 02:51:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.707 02:51:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.707 02:51:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.707 02:51:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.707 02:51:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.707 02:51:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.707 02:51:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.707 02:51:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.707 02:51:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.707 02:51:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.707 02:51:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:49.707 02:51:44 rpc -- scripts/common.sh@345 -- # : 1 00:03:49.707 02:51:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.707 02:51:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:49.707 02:51:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:49.707 02:51:44 rpc -- scripts/common.sh@353 -- # local d=1 00:03:49.707 02:51:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.707 02:51:44 rpc -- scripts/common.sh@355 -- # echo 1 00:03:49.707 02:51:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.707 02:51:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:49.707 02:51:44 rpc -- scripts/common.sh@353 -- # local d=2 00:03:49.707 02:51:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.707 02:51:44 rpc -- scripts/common.sh@355 -- # echo 2 00:03:49.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
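The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line is waitforlisten polling until spdk_tgt accepts connections. Everything rpc_cmd does afterwards reduces to JSON-RPC 2.0 over that socket; a bare-bones sketch follows (the real harness goes through scripts/rpc.py, and response handling is omitted):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"rpc_get_methods\"}";
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        return 1; /* not listening yet -- waitforlisten simply retries */
    }
    if (write(fd, req, strlen(req)) < 0) {
        return 1;
    }
    /* ... read() the JSON response and check it for an "error" member ... */
    close(fd);
    return 0;
}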
00:03:49.707 02:51:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.707 02:51:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.707 02:51:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.707 02:51:44 rpc -- scripts/common.sh@368 -- # return 0 00:03:49.707 02:51:44 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.707 02:51:44 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:49.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.707 --rc genhtml_branch_coverage=1 00:03:49.707 --rc genhtml_function_coverage=1 00:03:49.707 --rc genhtml_legend=1 00:03:49.707 --rc geninfo_all_blocks=1 00:03:49.707 --rc geninfo_unexecuted_blocks=1 00:03:49.707 00:03:49.707 ' 00:03:49.708 02:51:44 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.708 --rc genhtml_branch_coverage=1 00:03:49.708 --rc genhtml_function_coverage=1 00:03:49.708 --rc genhtml_legend=1 00:03:49.708 --rc geninfo_all_blocks=1 00:03:49.708 --rc geninfo_unexecuted_blocks=1 00:03:49.708 00:03:49.708 ' 00:03:49.708 02:51:44 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.708 --rc genhtml_branch_coverage=1 00:03:49.708 --rc genhtml_function_coverage=1 00:03:49.708 --rc genhtml_legend=1 00:03:49.708 --rc geninfo_all_blocks=1 00:03:49.708 --rc geninfo_unexecuted_blocks=1 00:03:49.708 00:03:49.708 ' 00:03:49.708 02:51:44 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.708 --rc genhtml_branch_coverage=1 00:03:49.708 --rc genhtml_function_coverage=1 00:03:49.708 --rc genhtml_legend=1 00:03:49.708 --rc geninfo_all_blocks=1 00:03:49.708 --rc geninfo_unexecuted_blocks=1 00:03:49.708 00:03:49.708 ' 00:03:49.708 02:51:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57107 00:03:49.708 02:51:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.708 02:51:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57107 00:03:49.708 02:51:44 rpc -- common/autotest_common.sh@835 -- # '[' -z 57107 ']' 00:03:49.708 02:51:44 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:49.708 02:51:44 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:49.708 02:51:44 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:49.708 02:51:44 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:49.708 02:51:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.708 02:51:44 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:49.965 [2024-12-10 02:51:44.104168] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:03:49.965 [2024-12-10 02:51:44.104296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57107 ] 00:03:49.965 [2024-12-10 02:51:44.263850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.223 [2024-12-10 02:51:44.366496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
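The tracepoint notices above come from spdk_tgt being launched with '-e bdev': the flag unmasks a single tracepoint group, which is why trace_get_info later reports "tpoint_group_mask": "0x8" with only the bdev group active. Inside the target this amounts to roughly the following, assuming the spdk/trace.h helper of this name behaves as sketched:

#include "spdk/trace.h"

/* Equivalent of the "-e bdev" command-line flag once tracing is set up:
 * unmask the bdev tracepoint group (bit 0x8 in the group mask). */
static int enable_bdev_tracepoints(void)
{
    return spdk_trace_enable_tpoint_group("bdev");
}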
00:03:50.223 [2024-12-10 02:51:44.366553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57107' to capture a snapshot of events at runtime. 00:03:50.223 [2024-12-10 02:51:44.366564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:50.223 [2024-12-10 02:51:44.366573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:50.223 [2024-12-10 02:51:44.366581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57107 for offline analysis/debug. 00:03:50.223 [2024-12-10 02:51:44.367454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.788 02:51:45 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:50.788 02:51:45 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:50.788 02:51:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:50.788 02:51:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:50.788 02:51:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:50.788 02:51:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:50.788 02:51:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.788 02:51:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.788 02:51:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.788 ************************************ 00:03:50.788 START TEST rpc_integrity 00:03:50.788 ************************************ 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:50.788 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.788 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:50.788 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:50.788 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:50.788 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.788 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:50.788 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.788 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:50.788 { 00:03:50.788 "name": "Malloc0", 00:03:50.788 "aliases": [ 00:03:50.788 "54ebe468-ecc1-4b45-b806-846fac5b8982" 00:03:50.788 ], 
00:03:50.788 "product_name": "Malloc disk", 00:03:50.788 "block_size": 512, 00:03:50.788 "num_blocks": 16384, 00:03:50.788 "uuid": "54ebe468-ecc1-4b45-b806-846fac5b8982", 00:03:50.788 "assigned_rate_limits": { 00:03:50.788 "rw_ios_per_sec": 0, 00:03:50.788 "rw_mbytes_per_sec": 0, 00:03:50.788 "r_mbytes_per_sec": 0, 00:03:50.788 "w_mbytes_per_sec": 0 00:03:50.788 }, 00:03:50.788 "claimed": false, 00:03:50.788 "zoned": false, 00:03:50.788 "supported_io_types": { 00:03:50.788 "read": true, 00:03:50.788 "write": true, 00:03:50.788 "unmap": true, 00:03:50.788 "flush": true, 00:03:50.788 "reset": true, 00:03:50.788 "nvme_admin": false, 00:03:50.788 "nvme_io": false, 00:03:50.788 "nvme_io_md": false, 00:03:50.788 "write_zeroes": true, 00:03:50.788 "zcopy": true, 00:03:50.788 "get_zone_info": false, 00:03:50.788 "zone_management": false, 00:03:50.788 "zone_append": false, 00:03:50.788 "compare": false, 00:03:50.788 "compare_and_write": false, 00:03:50.788 "abort": true, 00:03:50.788 "seek_hole": false, 00:03:50.788 "seek_data": false, 00:03:50.788 "copy": true, 00:03:50.788 "nvme_iov_md": false 00:03:50.788 }, 00:03:50.788 "memory_domains": [ 00:03:50.788 { 00:03:50.788 "dma_device_id": "system", 00:03:50.788 "dma_device_type": 1 00:03:50.788 }, 00:03:50.788 { 00:03:50.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.788 "dma_device_type": 2 00:03:50.788 } 00:03:50.788 ], 00:03:50.788 "driver_specific": {} 00:03:50.788 } 00:03:50.788 ]' 00:03:50.788 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:50.788 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:50.788 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.788 [2024-12-10 02:51:45.140816] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:50.788 [2024-12-10 02:51:45.140994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:50.788 [2024-12-10 02:51:45.141030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:03:50.788 [2024-12-10 02:51:45.141044] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:50.788 [2024-12-10 02:51:45.143292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:50.788 [2024-12-10 02:51:45.143340] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:50.788 Passthru0 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.788 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.788 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.788 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:50.788 { 00:03:50.788 "name": "Malloc0", 00:03:50.788 "aliases": [ 00:03:50.788 "54ebe468-ecc1-4b45-b806-846fac5b8982" 00:03:50.788 ], 00:03:50.788 "product_name": "Malloc disk", 00:03:50.788 "block_size": 512, 00:03:50.788 "num_blocks": 16384, 00:03:50.788 "uuid": "54ebe468-ecc1-4b45-b806-846fac5b8982", 00:03:50.788 "assigned_rate_limits": { 00:03:50.788 "rw_ios_per_sec": 0, 
00:03:50.788 "rw_mbytes_per_sec": 0, 00:03:50.788 "r_mbytes_per_sec": 0, 00:03:50.788 "w_mbytes_per_sec": 0 00:03:50.788 }, 00:03:50.788 "claimed": true, 00:03:50.788 "claim_type": "exclusive_write", 00:03:50.788 "zoned": false, 00:03:50.788 "supported_io_types": { 00:03:50.788 "read": true, 00:03:50.788 "write": true, 00:03:50.788 "unmap": true, 00:03:50.788 "flush": true, 00:03:50.788 "reset": true, 00:03:50.788 "nvme_admin": false, 00:03:50.788 "nvme_io": false, 00:03:50.788 "nvme_io_md": false, 00:03:50.788 "write_zeroes": true, 00:03:50.788 "zcopy": true, 00:03:50.788 "get_zone_info": false, 00:03:50.788 "zone_management": false, 00:03:50.788 "zone_append": false, 00:03:50.788 "compare": false, 00:03:50.788 "compare_and_write": false, 00:03:50.788 "abort": true, 00:03:50.788 "seek_hole": false, 00:03:50.788 "seek_data": false, 00:03:50.788 "copy": true, 00:03:50.788 "nvme_iov_md": false 00:03:50.788 }, 00:03:50.788 "memory_domains": [ 00:03:50.788 { 00:03:50.788 "dma_device_id": "system", 00:03:50.788 "dma_device_type": 1 00:03:50.788 }, 00:03:50.788 { 00:03:50.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.788 "dma_device_type": 2 00:03:50.788 } 00:03:50.788 ], 00:03:50.788 "driver_specific": {} 00:03:50.788 }, 00:03:50.788 { 00:03:50.788 "name": "Passthru0", 00:03:50.788 "aliases": [ 00:03:50.788 "ccf85cb5-eee2-5a06-8ab4-d8e1940d1a97" 00:03:50.788 ], 00:03:50.788 "product_name": "passthru", 00:03:50.788 "block_size": 512, 00:03:50.788 "num_blocks": 16384, 00:03:50.788 "uuid": "ccf85cb5-eee2-5a06-8ab4-d8e1940d1a97", 00:03:50.788 "assigned_rate_limits": { 00:03:50.788 "rw_ios_per_sec": 0, 00:03:50.788 "rw_mbytes_per_sec": 0, 00:03:50.788 "r_mbytes_per_sec": 0, 00:03:50.788 "w_mbytes_per_sec": 0 00:03:50.788 }, 00:03:50.788 "claimed": false, 00:03:50.788 "zoned": false, 00:03:50.788 "supported_io_types": { 00:03:50.788 "read": true, 00:03:50.788 "write": true, 00:03:50.788 "unmap": true, 00:03:50.788 "flush": true, 00:03:50.788 "reset": true, 00:03:50.788 "nvme_admin": false, 00:03:50.788 "nvme_io": false, 00:03:50.788 "nvme_io_md": false, 00:03:50.788 "write_zeroes": true, 00:03:50.788 "zcopy": true, 00:03:50.788 "get_zone_info": false, 00:03:50.788 "zone_management": false, 00:03:50.788 "zone_append": false, 00:03:50.788 "compare": false, 00:03:50.788 "compare_and_write": false, 00:03:50.788 "abort": true, 00:03:50.788 "seek_hole": false, 00:03:50.788 "seek_data": false, 00:03:50.788 "copy": true, 00:03:50.788 "nvme_iov_md": false 00:03:50.788 }, 00:03:50.788 "memory_domains": [ 00:03:50.788 { 00:03:50.788 "dma_device_id": "system", 00:03:50.788 "dma_device_type": 1 00:03:50.788 }, 00:03:50.788 { 00:03:50.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.788 "dma_device_type": 2 00:03:50.788 } 00:03:50.788 ], 00:03:50.788 "driver_specific": { 00:03:50.788 "passthru": { 00:03:50.788 "name": "Passthru0", 00:03:50.788 "base_bdev_name": "Malloc0" 00:03:50.788 } 00:03:50.788 } 00:03:50.788 } 00:03:50.788 ]' 00:03:50.789 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:51.046 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:51.046 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:51.046 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.046 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.046 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.046 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:03:51.046 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.046 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.046 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.046 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:51.046 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.046 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.046 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.046 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:51.046 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:51.046 ************************************ 00:03:51.046 END TEST rpc_integrity 00:03:51.046 ************************************ 00:03:51.046 02:51:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:51.046 00:03:51.046 real 0m0.237s 00:03:51.046 user 0m0.123s 00:03:51.046 sys 0m0.020s 00:03:51.046 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.046 02:51:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.046 02:51:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:51.046 02:51:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.046 02:51:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.046 02:51:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.046 ************************************ 00:03:51.046 START TEST rpc_plugins 00:03:51.046 ************************************ 00:03:51.046 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:51.046 02:51:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:51.046 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.046 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.046 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.046 02:51:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:51.046 02:51:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:51.046 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.046 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.046 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.046 02:51:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:51.046 { 00:03:51.046 "name": "Malloc1", 00:03:51.046 "aliases": [ 00:03:51.046 "b230ec56-1655-43aa-8490-f2365532ec8d" 00:03:51.046 ], 00:03:51.046 "product_name": "Malloc disk", 00:03:51.046 "block_size": 4096, 00:03:51.046 "num_blocks": 256, 00:03:51.046 "uuid": "b230ec56-1655-43aa-8490-f2365532ec8d", 00:03:51.046 "assigned_rate_limits": { 00:03:51.046 "rw_ios_per_sec": 0, 00:03:51.046 "rw_mbytes_per_sec": 0, 00:03:51.046 "r_mbytes_per_sec": 0, 00:03:51.046 "w_mbytes_per_sec": 0 00:03:51.046 }, 00:03:51.046 "claimed": false, 00:03:51.046 "zoned": false, 00:03:51.046 "supported_io_types": { 00:03:51.046 "read": true, 00:03:51.046 "write": true, 00:03:51.046 "unmap": true, 00:03:51.046 "flush": true, 00:03:51.046 "reset": true, 00:03:51.046 "nvme_admin": false, 00:03:51.046 "nvme_io": false, 00:03:51.046 "nvme_io_md": false, 00:03:51.046 "write_zeroes": true, 
00:03:51.046 "zcopy": true, 00:03:51.046 "get_zone_info": false, 00:03:51.046 "zone_management": false, 00:03:51.046 "zone_append": false, 00:03:51.046 "compare": false, 00:03:51.046 "compare_and_write": false, 00:03:51.046 "abort": true, 00:03:51.046 "seek_hole": false, 00:03:51.046 "seek_data": false, 00:03:51.046 "copy": true, 00:03:51.046 "nvme_iov_md": false 00:03:51.046 }, 00:03:51.046 "memory_domains": [ 00:03:51.046 { 00:03:51.046 "dma_device_id": "system", 00:03:51.046 "dma_device_type": 1 00:03:51.046 }, 00:03:51.046 { 00:03:51.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.046 "dma_device_type": 2 00:03:51.046 } 00:03:51.046 ], 00:03:51.046 "driver_specific": {} 00:03:51.046 } 00:03:51.046 ]' 00:03:51.046 02:51:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:51.046 02:51:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:51.046 02:51:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:51.046 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.046 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.046 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.046 02:51:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:51.046 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.046 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.046 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.046 02:51:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:51.046 02:51:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:51.305 ************************************ 00:03:51.305 END TEST rpc_plugins 00:03:51.305 ************************************ 00:03:51.305 02:51:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:51.305 00:03:51.305 real 0m0.121s 00:03:51.305 user 0m0.067s 00:03:51.305 sys 0m0.013s 00:03:51.305 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.305 02:51:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.305 02:51:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:51.305 02:51:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.305 02:51:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.305 02:51:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.305 ************************************ 00:03:51.305 START TEST rpc_trace_cmd_test 00:03:51.305 ************************************ 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:51.305 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57107", 00:03:51.305 "tpoint_group_mask": "0x8", 00:03:51.305 "iscsi_conn": { 00:03:51.305 "mask": "0x2", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "scsi": { 00:03:51.305 
"mask": "0x4", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "bdev": { 00:03:51.305 "mask": "0x8", 00:03:51.305 "tpoint_mask": "0xffffffffffffffff" 00:03:51.305 }, 00:03:51.305 "nvmf_rdma": { 00:03:51.305 "mask": "0x10", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "nvmf_tcp": { 00:03:51.305 "mask": "0x20", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "ftl": { 00:03:51.305 "mask": "0x40", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "blobfs": { 00:03:51.305 "mask": "0x80", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "dsa": { 00:03:51.305 "mask": "0x200", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "thread": { 00:03:51.305 "mask": "0x400", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "nvme_pcie": { 00:03:51.305 "mask": "0x800", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "iaa": { 00:03:51.305 "mask": "0x1000", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "nvme_tcp": { 00:03:51.305 "mask": "0x2000", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "bdev_nvme": { 00:03:51.305 "mask": "0x4000", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "sock": { 00:03:51.305 "mask": "0x8000", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "blob": { 00:03:51.305 "mask": "0x10000", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "bdev_raid": { 00:03:51.305 "mask": "0x20000", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 }, 00:03:51.305 "scheduler": { 00:03:51.305 "mask": "0x40000", 00:03:51.305 "tpoint_mask": "0x0" 00:03:51.305 } 00:03:51.305 }' 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:51.305 02:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:51.564 ************************************ 00:03:51.564 02:51:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:51.564 00:03:51.564 real 0m0.186s 00:03:51.564 user 0m0.146s 00:03:51.564 sys 0m0.029s 00:03:51.564 02:51:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.564 02:51:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:51.564 END TEST rpc_trace_cmd_test 00:03:51.564 ************************************ 00:03:51.564 02:51:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:51.564 02:51:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:51.564 02:51:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:51.564 02:51:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.564 02:51:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.564 02:51:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.564 ************************************ 00:03:51.564 START TEST rpc_daemon_integrity 00:03:51.564 
************************************ 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.564 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:51.564 { 00:03:51.564 "name": "Malloc2", 00:03:51.564 "aliases": [ 00:03:51.564 "e74340c1-9307-4513-b814-5bf2f2453ae8" 00:03:51.564 ], 00:03:51.564 "product_name": "Malloc disk", 00:03:51.564 "block_size": 512, 00:03:51.564 "num_blocks": 16384, 00:03:51.564 "uuid": "e74340c1-9307-4513-b814-5bf2f2453ae8", 00:03:51.564 "assigned_rate_limits": { 00:03:51.564 "rw_ios_per_sec": 0, 00:03:51.564 "rw_mbytes_per_sec": 0, 00:03:51.564 "r_mbytes_per_sec": 0, 00:03:51.564 "w_mbytes_per_sec": 0 00:03:51.564 }, 00:03:51.564 "claimed": false, 00:03:51.564 "zoned": false, 00:03:51.564 "supported_io_types": { 00:03:51.564 "read": true, 00:03:51.564 "write": true, 00:03:51.564 "unmap": true, 00:03:51.564 "flush": true, 00:03:51.564 "reset": true, 00:03:51.564 "nvme_admin": false, 00:03:51.564 "nvme_io": false, 00:03:51.564 "nvme_io_md": false, 00:03:51.564 "write_zeroes": true, 00:03:51.564 "zcopy": true, 00:03:51.564 "get_zone_info": false, 00:03:51.564 "zone_management": false, 00:03:51.564 "zone_append": false, 00:03:51.564 "compare": false, 00:03:51.564 "compare_and_write": false, 00:03:51.564 "abort": true, 00:03:51.564 "seek_hole": false, 00:03:51.564 "seek_data": false, 00:03:51.564 "copy": true, 00:03:51.564 "nvme_iov_md": false 00:03:51.564 }, 00:03:51.564 "memory_domains": [ 00:03:51.564 { 00:03:51.564 "dma_device_id": "system", 00:03:51.564 "dma_device_type": 1 00:03:51.564 }, 00:03:51.564 { 00:03:51.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.564 "dma_device_type": 2 00:03:51.564 } 00:03:51.564 ], 00:03:51.564 "driver_specific": {} 00:03:51.564 } 00:03:51.564 ]' 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.565 [2024-12-10 02:51:45.865220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:51.565 [2024-12-10 02:51:45.865302] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:51.565 [2024-12-10 02:51:45.865332] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:03:51.565 [2024-12-10 02:51:45.865347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:51.565 [2024-12-10 02:51:45.868403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:51.565 [2024-12-10 02:51:45.868460] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:51.565 Passthru0 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:51.565 { 00:03:51.565 "name": "Malloc2", 00:03:51.565 "aliases": [ 00:03:51.565 "e74340c1-9307-4513-b814-5bf2f2453ae8" 00:03:51.565 ], 00:03:51.565 "product_name": "Malloc disk", 00:03:51.565 "block_size": 512, 00:03:51.565 "num_blocks": 16384, 00:03:51.565 "uuid": "e74340c1-9307-4513-b814-5bf2f2453ae8", 00:03:51.565 "assigned_rate_limits": { 00:03:51.565 "rw_ios_per_sec": 0, 00:03:51.565 "rw_mbytes_per_sec": 0, 00:03:51.565 "r_mbytes_per_sec": 0, 00:03:51.565 "w_mbytes_per_sec": 0 00:03:51.565 }, 00:03:51.565 "claimed": true, 00:03:51.565 "claim_type": "exclusive_write", 00:03:51.565 "zoned": false, 00:03:51.565 "supported_io_types": { 00:03:51.565 "read": true, 00:03:51.565 "write": true, 00:03:51.565 "unmap": true, 00:03:51.565 "flush": true, 00:03:51.565 "reset": true, 00:03:51.565 "nvme_admin": false, 00:03:51.565 "nvme_io": false, 00:03:51.565 "nvme_io_md": false, 00:03:51.565 "write_zeroes": true, 00:03:51.565 "zcopy": true, 00:03:51.565 "get_zone_info": false, 00:03:51.565 "zone_management": false, 00:03:51.565 "zone_append": false, 00:03:51.565 "compare": false, 00:03:51.565 "compare_and_write": false, 00:03:51.565 "abort": true, 00:03:51.565 "seek_hole": false, 00:03:51.565 "seek_data": false, 00:03:51.565 "copy": true, 00:03:51.565 "nvme_iov_md": false 00:03:51.565 }, 00:03:51.565 "memory_domains": [ 00:03:51.565 { 00:03:51.565 "dma_device_id": "system", 00:03:51.565 "dma_device_type": 1 00:03:51.565 }, 00:03:51.565 { 00:03:51.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.565 "dma_device_type": 2 00:03:51.565 } 00:03:51.565 ], 00:03:51.565 "driver_specific": {} 00:03:51.565 }, 00:03:51.565 { 00:03:51.565 "name": "Passthru0", 00:03:51.565 "aliases": [ 00:03:51.565 "78ac5771-7c7b-5b2a-8c1b-874ef092856f" 00:03:51.565 ], 00:03:51.565 "product_name": "passthru", 00:03:51.565 "block_size": 512, 00:03:51.565 "num_blocks": 16384, 00:03:51.565 "uuid": "78ac5771-7c7b-5b2a-8c1b-874ef092856f", 00:03:51.565 "assigned_rate_limits": { 00:03:51.565 
"rw_ios_per_sec": 0, 00:03:51.565 "rw_mbytes_per_sec": 0, 00:03:51.565 "r_mbytes_per_sec": 0, 00:03:51.565 "w_mbytes_per_sec": 0 00:03:51.565 }, 00:03:51.565 "claimed": false, 00:03:51.565 "zoned": false, 00:03:51.565 "supported_io_types": { 00:03:51.565 "read": true, 00:03:51.565 "write": true, 00:03:51.565 "unmap": true, 00:03:51.565 "flush": true, 00:03:51.565 "reset": true, 00:03:51.565 "nvme_admin": false, 00:03:51.565 "nvme_io": false, 00:03:51.565 "nvme_io_md": false, 00:03:51.565 "write_zeroes": true, 00:03:51.565 "zcopy": true, 00:03:51.565 "get_zone_info": false, 00:03:51.565 "zone_management": false, 00:03:51.565 "zone_append": false, 00:03:51.565 "compare": false, 00:03:51.565 "compare_and_write": false, 00:03:51.565 "abort": true, 00:03:51.565 "seek_hole": false, 00:03:51.565 "seek_data": false, 00:03:51.565 "copy": true, 00:03:51.565 "nvme_iov_md": false 00:03:51.565 }, 00:03:51.565 "memory_domains": [ 00:03:51.565 { 00:03:51.565 "dma_device_id": "system", 00:03:51.565 "dma_device_type": 1 00:03:51.565 }, 00:03:51.565 { 00:03:51.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.565 "dma_device_type": 2 00:03:51.565 } 00:03:51.565 ], 00:03:51.565 "driver_specific": { 00:03:51.565 "passthru": { 00:03:51.565 "name": "Passthru0", 00:03:51.565 "base_bdev_name": "Malloc2" 00:03:51.565 } 00:03:51.565 } 00:03:51.565 } 00:03:51.565 ]' 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.565 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.823 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.823 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:51.823 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.823 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.823 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.823 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:51.823 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:51.823 ************************************ 00:03:51.823 END TEST rpc_daemon_integrity 00:03:51.823 ************************************ 00:03:51.823 02:51:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:51.823 00:03:51.823 real 0m0.248s 00:03:51.823 user 0m0.127s 00:03:51.823 sys 0m0.032s 00:03:51.823 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.823 02:51:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.823 02:51:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:51.823 02:51:46 rpc -- rpc/rpc.sh@84 -- # killprocess 57107 00:03:51.823 02:51:46 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 57107 ']' 00:03:51.823 02:51:46 rpc -- common/autotest_common.sh@958 -- # kill -0 57107 00:03:51.823 02:51:46 rpc -- common/autotest_common.sh@959 -- # uname 00:03:51.823 02:51:46 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:51.823 02:51:46 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57107 00:03:51.823 killing process with pid 57107 00:03:51.823 02:51:46 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:51.823 02:51:46 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:51.823 02:51:46 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57107' 00:03:51.823 02:51:46 rpc -- common/autotest_common.sh@973 -- # kill 57107 00:03:51.823 02:51:46 rpc -- common/autotest_common.sh@978 -- # wait 57107 00:03:53.740 00:03:53.740 real 0m3.754s 00:03:53.740 user 0m4.188s 00:03:53.740 sys 0m0.580s 00:03:53.740 02:51:47 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.740 02:51:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.740 ************************************ 00:03:53.740 END TEST rpc 00:03:53.740 ************************************ 00:03:53.740 02:51:47 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:53.740 02:51:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.740 02:51:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.740 02:51:47 -- common/autotest_common.sh@10 -- # set +x 00:03:53.740 ************************************ 00:03:53.740 START TEST skip_rpc 00:03:53.740 ************************************ 00:03:53.740 02:51:47 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:53.740 * Looking for test storage... 00:03:53.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:53.740 02:51:47 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:53.740 02:51:47 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:53.740 02:51:47 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:53.740 02:51:47 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.740 02:51:47 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:53.740 02:51:47 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.740 02:51:47 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:53.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.740 --rc genhtml_branch_coverage=1 00:03:53.740 --rc genhtml_function_coverage=1 00:03:53.740 --rc genhtml_legend=1 00:03:53.740 --rc geninfo_all_blocks=1 00:03:53.740 --rc geninfo_unexecuted_blocks=1 00:03:53.740 00:03:53.740 ' 00:03:53.740 02:51:47 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:53.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.740 --rc genhtml_branch_coverage=1 00:03:53.740 --rc genhtml_function_coverage=1 00:03:53.740 --rc genhtml_legend=1 00:03:53.741 --rc geninfo_all_blocks=1 00:03:53.741 --rc geninfo_unexecuted_blocks=1 00:03:53.741 00:03:53.741 ' 00:03:53.741 02:51:47 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:53.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.741 --rc genhtml_branch_coverage=1 00:03:53.741 --rc genhtml_function_coverage=1 00:03:53.741 --rc genhtml_legend=1 00:03:53.741 --rc geninfo_all_blocks=1 00:03:53.741 --rc geninfo_unexecuted_blocks=1 00:03:53.741 00:03:53.741 ' 00:03:53.741 02:51:47 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:53.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.741 --rc genhtml_branch_coverage=1 00:03:53.741 --rc genhtml_function_coverage=1 00:03:53.741 --rc genhtml_legend=1 00:03:53.741 --rc geninfo_all_blocks=1 00:03:53.741 --rc geninfo_unexecuted_blocks=1 00:03:53.741 00:03:53.741 ' 00:03:53.741 02:51:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:53.741 02:51:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:53.741 02:51:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:53.741 02:51:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.741 02:51:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.741 02:51:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.741 ************************************ 00:03:53.741 START TEST skip_rpc 00:03:53.741 ************************************ 00:03:53.741 02:51:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:53.741 02:51:47 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57319 00:03:53.741 02:51:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:53.741 02:51:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:53.741 02:51:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:53.741 [2024-12-10 02:51:47.895659] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:03:53.741 [2024-12-10 02:51:47.895782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57319 ] 00:03:53.741 [2024-12-10 02:51:48.057438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.022 [2024-12-10 02:51:48.158220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57319 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57319 ']' 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57319 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57319 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:59.284 killing process with pid 57319 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57319' 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 57319 00:03:59.284 02:51:52 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57319 00:04:00.216 00:04:00.216 real 0m6.571s 00:04:00.216 user 0m6.173s 00:04:00.216 sys 0m0.294s 00:04:00.216 02:51:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.216 ************************************ 00:04:00.216 END TEST skip_rpc 00:04:00.216 ************************************ 00:04:00.216 02:51:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.216 02:51:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:00.216 02:51:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.216 02:51:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.216 02:51:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.216 ************************************ 00:04:00.216 START TEST skip_rpc_with_json 00:04:00.216 ************************************ 00:04:00.216 02:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:00.216 02:51:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:00.216 02:51:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57418 00:04:00.216 02:51:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.216 02:51:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57418 00:04:00.216 02:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57418 ']' 00:04:00.216 02:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.216 02:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.217 02:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.217 02:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.217 02:51:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:00.217 02:51:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:00.217 [2024-12-10 02:51:54.526670] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
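The skip_rpc case that just closed above exercises one idea: when spdk_tgt is launched with --no-rpc-server, every RPC must fail, and the harness asserts on that failure. A minimal stand-alone sketch of the same flow, assuming a built SPDK tree with the stock spdk_tgt binary and scripts/rpc.py client (an illustration, not the harness code itself):

  # Start the target without an RPC server, as the test does.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5

  # With no server listening on /var/tmp/spdk.sock, this must fail.
  if ./scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC succeeded with --no-rpc-server" >&2
  fi

  kill "$tgt_pid" && wait "$tgt_pid"

The harness reaches the same verdict through its NOT/valid_exec_arg wrappers, which is why the trace above shows rpc_cmd spdk_get_version returning es=1 while the test still passes.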
00:04:00.217 [2024-12-10 02:51:54.526800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57418 ] 00:04:00.474 [2024-12-10 02:51:54.686710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.474 [2024-12-10 02:51:54.789370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.040 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.040 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:01.040 02:51:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:01.040 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.040 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:01.040 [2024-12-10 02:51:55.407731] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:01.040 request: 00:04:01.040 { 00:04:01.040 "trtype": "tcp", 00:04:01.040 "method": "nvmf_get_transports", 00:04:01.040 "req_id": 1 00:04:01.040 } 00:04:01.040 Got JSON-RPC error response 00:04:01.040 response: 00:04:01.040 { 00:04:01.040 "code": -19, 00:04:01.040 "message": "No such device" 00:04:01.040 } 00:04:01.040 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:01.040 02:51:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:01.040 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.040 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:01.040 [2024-12-10 02:51:55.419857] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:01.298 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.298 02:51:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:01.298 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.298 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:01.298 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.298 02:51:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:01.298 { 00:04:01.298 "subsystems": [ 00:04:01.298 { 00:04:01.298 "subsystem": "fsdev", 00:04:01.298 "config": [ 00:04:01.298 { 00:04:01.298 "method": "fsdev_set_opts", 00:04:01.298 "params": { 00:04:01.298 "fsdev_io_pool_size": 65535, 00:04:01.298 "fsdev_io_cache_size": 256 00:04:01.298 } 00:04:01.298 } 00:04:01.298 ] 00:04:01.298 }, 00:04:01.298 { 00:04:01.298 "subsystem": "keyring", 00:04:01.298 "config": [] 00:04:01.298 }, 00:04:01.298 { 00:04:01.298 "subsystem": "iobuf", 00:04:01.298 "config": [ 00:04:01.299 { 00:04:01.299 "method": "iobuf_set_options", 00:04:01.299 "params": { 00:04:01.299 "small_pool_count": 8192, 00:04:01.299 "large_pool_count": 1024, 00:04:01.299 "small_bufsize": 8192, 00:04:01.299 "large_bufsize": 135168, 00:04:01.299 "enable_numa": false 00:04:01.299 } 00:04:01.299 } 00:04:01.299 ] 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "subsystem": "sock", 00:04:01.299 "config": [ 00:04:01.299 { 
00:04:01.299 "method": "sock_set_default_impl", 00:04:01.299 "params": { 00:04:01.299 "impl_name": "posix" 00:04:01.299 } 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "method": "sock_impl_set_options", 00:04:01.299 "params": { 00:04:01.299 "impl_name": "ssl", 00:04:01.299 "recv_buf_size": 4096, 00:04:01.299 "send_buf_size": 4096, 00:04:01.299 "enable_recv_pipe": true, 00:04:01.299 "enable_quickack": false, 00:04:01.299 "enable_placement_id": 0, 00:04:01.299 "enable_zerocopy_send_server": true, 00:04:01.299 "enable_zerocopy_send_client": false, 00:04:01.299 "zerocopy_threshold": 0, 00:04:01.299 "tls_version": 0, 00:04:01.299 "enable_ktls": false 00:04:01.299 } 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "method": "sock_impl_set_options", 00:04:01.299 "params": { 00:04:01.299 "impl_name": "posix", 00:04:01.299 "recv_buf_size": 2097152, 00:04:01.299 "send_buf_size": 2097152, 00:04:01.299 "enable_recv_pipe": true, 00:04:01.299 "enable_quickack": false, 00:04:01.299 "enable_placement_id": 0, 00:04:01.299 "enable_zerocopy_send_server": true, 00:04:01.299 "enable_zerocopy_send_client": false, 00:04:01.299 "zerocopy_threshold": 0, 00:04:01.299 "tls_version": 0, 00:04:01.299 "enable_ktls": false 00:04:01.299 } 00:04:01.299 } 00:04:01.299 ] 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "subsystem": "vmd", 00:04:01.299 "config": [] 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "subsystem": "accel", 00:04:01.299 "config": [ 00:04:01.299 { 00:04:01.299 "method": "accel_set_options", 00:04:01.299 "params": { 00:04:01.299 "small_cache_size": 128, 00:04:01.299 "large_cache_size": 16, 00:04:01.299 "task_count": 2048, 00:04:01.299 "sequence_count": 2048, 00:04:01.299 "buf_count": 2048 00:04:01.299 } 00:04:01.299 } 00:04:01.299 ] 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "subsystem": "bdev", 00:04:01.299 "config": [ 00:04:01.299 { 00:04:01.299 "method": "bdev_set_options", 00:04:01.299 "params": { 00:04:01.299 "bdev_io_pool_size": 65535, 00:04:01.299 "bdev_io_cache_size": 256, 00:04:01.299 "bdev_auto_examine": true, 00:04:01.299 "iobuf_small_cache_size": 128, 00:04:01.299 "iobuf_large_cache_size": 16 00:04:01.299 } 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "method": "bdev_raid_set_options", 00:04:01.299 "params": { 00:04:01.299 "process_window_size_kb": 1024, 00:04:01.299 "process_max_bandwidth_mb_sec": 0 00:04:01.299 } 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "method": "bdev_iscsi_set_options", 00:04:01.299 "params": { 00:04:01.299 "timeout_sec": 30 00:04:01.299 } 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "method": "bdev_nvme_set_options", 00:04:01.299 "params": { 00:04:01.299 "action_on_timeout": "none", 00:04:01.299 "timeout_us": 0, 00:04:01.299 "timeout_admin_us": 0, 00:04:01.299 "keep_alive_timeout_ms": 10000, 00:04:01.299 "arbitration_burst": 0, 00:04:01.299 "low_priority_weight": 0, 00:04:01.299 "medium_priority_weight": 0, 00:04:01.299 "high_priority_weight": 0, 00:04:01.299 "nvme_adminq_poll_period_us": 10000, 00:04:01.299 "nvme_ioq_poll_period_us": 0, 00:04:01.299 "io_queue_requests": 0, 00:04:01.299 "delay_cmd_submit": true, 00:04:01.299 "transport_retry_count": 4, 00:04:01.299 "bdev_retry_count": 3, 00:04:01.299 "transport_ack_timeout": 0, 00:04:01.299 "ctrlr_loss_timeout_sec": 0, 00:04:01.299 "reconnect_delay_sec": 0, 00:04:01.299 "fast_io_fail_timeout_sec": 0, 00:04:01.299 "disable_auto_failback": false, 00:04:01.299 "generate_uuids": false, 00:04:01.299 "transport_tos": 0, 00:04:01.299 "nvme_error_stat": false, 00:04:01.299 "rdma_srq_size": 0, 00:04:01.299 "io_path_stat": false, 
00:04:01.299 "allow_accel_sequence": false, 00:04:01.299 "rdma_max_cq_size": 0, 00:04:01.299 "rdma_cm_event_timeout_ms": 0, 00:04:01.299 "dhchap_digests": [ 00:04:01.299 "sha256", 00:04:01.299 "sha384", 00:04:01.299 "sha512" 00:04:01.299 ], 00:04:01.299 "dhchap_dhgroups": [ 00:04:01.299 "null", 00:04:01.299 "ffdhe2048", 00:04:01.299 "ffdhe3072", 00:04:01.299 "ffdhe4096", 00:04:01.299 "ffdhe6144", 00:04:01.299 "ffdhe8192" 00:04:01.299 ] 00:04:01.299 } 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "method": "bdev_nvme_set_hotplug", 00:04:01.299 "params": { 00:04:01.299 "period_us": 100000, 00:04:01.299 "enable": false 00:04:01.299 } 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "method": "bdev_wait_for_examine" 00:04:01.299 } 00:04:01.299 ] 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "subsystem": "scsi", 00:04:01.299 "config": null 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "subsystem": "scheduler", 00:04:01.299 "config": [ 00:04:01.299 { 00:04:01.299 "method": "framework_set_scheduler", 00:04:01.299 "params": { 00:04:01.299 "name": "static" 00:04:01.299 } 00:04:01.299 } 00:04:01.299 ] 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "subsystem": "vhost_scsi", 00:04:01.299 "config": [] 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "subsystem": "vhost_blk", 00:04:01.299 "config": [] 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "subsystem": "ublk", 00:04:01.299 "config": [] 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "subsystem": "nbd", 00:04:01.299 "config": [] 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "subsystem": "nvmf", 00:04:01.299 "config": [ 00:04:01.299 { 00:04:01.299 "method": "nvmf_set_config", 00:04:01.299 "params": { 00:04:01.299 "discovery_filter": "match_any", 00:04:01.299 "admin_cmd_passthru": { 00:04:01.299 "identify_ctrlr": false 00:04:01.299 }, 00:04:01.299 "dhchap_digests": [ 00:04:01.299 "sha256", 00:04:01.299 "sha384", 00:04:01.299 "sha512" 00:04:01.299 ], 00:04:01.299 "dhchap_dhgroups": [ 00:04:01.299 "null", 00:04:01.299 "ffdhe2048", 00:04:01.299 "ffdhe3072", 00:04:01.299 "ffdhe4096", 00:04:01.299 "ffdhe6144", 00:04:01.299 "ffdhe8192" 00:04:01.299 ] 00:04:01.299 } 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "method": "nvmf_set_max_subsystems", 00:04:01.299 "params": { 00:04:01.299 "max_subsystems": 1024 00:04:01.299 } 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "method": "nvmf_set_crdt", 00:04:01.299 "params": { 00:04:01.299 "crdt1": 0, 00:04:01.299 "crdt2": 0, 00:04:01.299 "crdt3": 0 00:04:01.299 } 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "method": "nvmf_create_transport", 00:04:01.299 "params": { 00:04:01.299 "trtype": "TCP", 00:04:01.299 "max_queue_depth": 128, 00:04:01.299 "max_io_qpairs_per_ctrlr": 127, 00:04:01.299 "in_capsule_data_size": 4096, 00:04:01.299 "max_io_size": 131072, 00:04:01.299 "io_unit_size": 131072, 00:04:01.299 "max_aq_depth": 128, 00:04:01.299 "num_shared_buffers": 511, 00:04:01.299 "buf_cache_size": 4294967295, 00:04:01.299 "dif_insert_or_strip": false, 00:04:01.299 "zcopy": false, 00:04:01.299 "c2h_success": true, 00:04:01.299 "sock_priority": 0, 00:04:01.299 "abort_timeout_sec": 1, 00:04:01.299 "ack_timeout": 0, 00:04:01.299 "data_wr_pool_size": 0 00:04:01.299 } 00:04:01.299 } 00:04:01.299 ] 00:04:01.299 }, 00:04:01.299 { 00:04:01.299 "subsystem": "iscsi", 00:04:01.299 "config": [ 00:04:01.299 { 00:04:01.299 "method": "iscsi_set_options", 00:04:01.299 "params": { 00:04:01.299 "node_base": "iqn.2016-06.io.spdk", 00:04:01.299 "max_sessions": 128, 00:04:01.299 "max_connections_per_session": 2, 00:04:01.299 "max_queue_depth": 64, 00:04:01.299 
"default_time2wait": 2, 00:04:01.299 "default_time2retain": 20, 00:04:01.299 "first_burst_length": 8192, 00:04:01.299 "immediate_data": true, 00:04:01.299 "allow_duplicated_isid": false, 00:04:01.299 "error_recovery_level": 0, 00:04:01.299 "nop_timeout": 60, 00:04:01.299 "nop_in_interval": 30, 00:04:01.299 "disable_chap": false, 00:04:01.299 "require_chap": false, 00:04:01.299 "mutual_chap": false, 00:04:01.299 "chap_group": 0, 00:04:01.299 "max_large_datain_per_connection": 64, 00:04:01.299 "max_r2t_per_connection": 4, 00:04:01.299 "pdu_pool_size": 36864, 00:04:01.299 "immediate_data_pool_size": 16384, 00:04:01.299 "data_out_pool_size": 2048 00:04:01.299 } 00:04:01.299 } 00:04:01.299 ] 00:04:01.299 } 00:04:01.299 ] 00:04:01.299 } 00:04:01.299 02:51:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:01.299 02:51:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57418 00:04:01.299 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57418 ']' 00:04:01.299 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57418 00:04:01.299 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:01.299 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.299 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57418 00:04:01.300 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.300 killing process with pid 57418 00:04:01.300 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.300 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57418' 00:04:01.300 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57418 00:04:01.300 02:51:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57418 00:04:03.198 02:51:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57463 00:04:03.198 02:51:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:03.198 02:51:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:08.513 02:52:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57463 00:04:08.513 02:52:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57463 ']' 00:04:08.513 02:52:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57463 00:04:08.513 02:52:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:08.513 02:52:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.513 02:52:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57463 00:04:08.513 02:52:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.513 killing process with pid 57463 00:04:08.514 02:52:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.514 02:52:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57463' 00:04:08.514 02:52:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57463 00:04:08.514 02:52:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57463 00:04:09.078 02:52:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:09.078 02:52:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:09.078 00:04:09.078 real 0m8.974s 00:04:09.078 user 0m8.531s 00:04:09.078 sys 0m0.657s 00:04:09.078 02:52:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.078 ************************************ 00:04:09.079 02:52:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.079 END TEST skip_rpc_with_json 00:04:09.079 ************************************ 00:04:09.079 02:52:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:09.079 02:52:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.079 02:52:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.079 02:52:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.336 ************************************ 00:04:09.336 START TEST skip_rpc_with_delay 00:04:09.336 ************************************ 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:09.336 [2024-12-10 02:52:03.544510] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
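The skip_rpc_with_delay case above is pure argument validation: spdk_tgt rejects --no-rpc-server combined with --wait-for-rpc at startup, since there would be no RPC server left to release the wait, and the test only requires that the launch exits non-zero. The NOT helper whose trace fills the preceding lines supplies the inversion; a rough sketch of its shape, simplified from the autotest_common.sh pattern visible in the trace (the real helper also goes through valid_exec_arg and treats exit codes above 128 as crashes):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"   # killed by a signal: propagate as-is
      (( es == 0 )) && return 1        # command succeeded, so NOT fails
      return 0                         # command failed as expected
  }

  # Usage mirroring the test: succeeds because spdk_tgt refuses the flag pair.
  NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc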
00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:09.336 00:04:09.336 real 0m0.130s 00:04:09.336 user 0m0.070s 00:04:09.336 sys 0m0.058s 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.336 02:52:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:09.336 ************************************ 00:04:09.336 END TEST skip_rpc_with_delay 00:04:09.336 ************************************ 00:04:09.336 02:52:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:09.336 02:52:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:09.336 02:52:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:09.336 02:52:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.336 02:52:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.336 02:52:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.336 ************************************ 00:04:09.336 START TEST exit_on_failed_rpc_init 00:04:09.336 ************************************ 00:04:09.336 02:52:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:09.336 02:52:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57580 00:04:09.336 02:52:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:09.336 02:52:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57580 00:04:09.336 02:52:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57580 ']' 00:04:09.336 02:52:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.336 02:52:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.336 02:52:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.336 02:52:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.336 02:52:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:09.594 [2024-12-10 02:52:03.731806] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
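waitforlisten above gates the rest of exit_on_failed_rpc_init on the first target actually owning /var/tmp/spdk.sock. A rough sketch of that polling idea, assuming the stock rpc.py client (simplified; the real helper in autotest_common.sh also aborts if the pid disappears):

  wait_for_rpc_socket() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      local i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1     # target died during startup
          ./scripts/rpc.py -s "$sock" spdk_get_version \
              >/dev/null 2>&1 && return 0            # socket is answering
          sleep 0.5
      done
      return 1
  }

Only once this succeeds does the harness launch the second spdk_tgt instance, whose failure is the actual subject of the test.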
00:04:09.594 [2024-12-10 02:52:03.731935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57580 ] 00:04:09.594 [2024-12-10 02:52:03.887912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.594 [2024-12-10 02:52:03.973606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:10.527 02:52:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.527 [2024-12-10 02:52:04.638465] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:10.527 [2024-12-10 02:52:04.638634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57598 ] 00:04:10.527 [2024-12-10 02:52:04.807039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.784 [2024-12-10 02:52:04.910977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.784 [2024-12-10 02:52:04.911055] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
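The error above is the expected outcome: both instances default to the same RPC socket, so the second one cannot listen and, as the lines that follow show, spdk_app_stop takes it down with a non-zero status, which is exactly what the test asserts. A minimal stand-alone reproduction sketch (assuming a built tree and default socket paths; illustration only):

  ./build/bin/spdk_tgt -m 0x1 &            # first instance binds /var/tmp/spdk.sock
  first=$!
  sleep 5                                  # crude stand-in for waitforlisten
  ./build/bin/spdk_tgt -m 0x2              # second instance: RPC listen fails, app stops
  echo "second instance exited with $?"    # non-zero, as required here
  kill "$first" && wait "$first"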
00:04:10.784 [2024-12-10 02:52:04.911068] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:10.784 [2024-12-10 02:52:04.911082] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57580 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57580 ']' 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57580 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57580 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.784 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57580' 00:04:10.784 killing process with pid 57580 00:04:10.785 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57580 00:04:10.785 02:52:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57580 00:04:12.154 ************************************ 00:04:12.154 END TEST exit_on_failed_rpc_init 00:04:12.154 00:04:12.154 real 0m2.724s 00:04:12.154 user 0m3.040s 00:04:12.154 sys 0m0.441s 00:04:12.154 02:52:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.154 02:52:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.154 ************************************ 00:04:12.154 02:52:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:12.154 00:04:12.154 real 0m18.710s 00:04:12.154 user 0m17.944s 00:04:12.154 sys 0m1.621s 00:04:12.154 02:52:06 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.154 ************************************ 00:04:12.154 END TEST skip_rpc 00:04:12.154 ************************************ 00:04:12.154 02:52:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.154 02:52:06 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:12.154 02:52:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.154 02:52:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.154 02:52:06 -- common/autotest_common.sh@10 -- # set +x 00:04:12.154 
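Every START TEST / END TEST banner pair in this log, including the rpc_client one that follows, comes from the harness's run_test wrapper, and the real/user/sys triple above is the time builtin it applies. A sketch of the wrapper's shape, simplified from the autotest_common.sh pattern (the real helper also validates its arguments and manages xtrace state):

  run_test() {
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                    # produces the real/user/sys lines
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }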
************************************ 00:04:12.154 START TEST rpc_client 00:04:12.154 ************************************ 00:04:12.154 02:52:06 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:12.154 * Looking for test storage... 00:04:12.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:12.154 02:52:06 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:12.154 02:52:06 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:12.154 02:52:06 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:12.413 02:52:06 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.413 02:52:06 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:12.413 02:52:06 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.413 02:52:06 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:12.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.413 --rc genhtml_branch_coverage=1 00:04:12.413 --rc genhtml_function_coverage=1 00:04:12.413 --rc genhtml_legend=1 00:04:12.413 --rc geninfo_all_blocks=1 00:04:12.413 --rc geninfo_unexecuted_blocks=1 00:04:12.413 00:04:12.413 ' 00:04:12.413 02:52:06 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:12.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.413 --rc genhtml_branch_coverage=1 00:04:12.413 --rc genhtml_function_coverage=1 00:04:12.413 --rc genhtml_legend=1 00:04:12.413 --rc geninfo_all_blocks=1 00:04:12.413 --rc geninfo_unexecuted_blocks=1 00:04:12.413 00:04:12.413 ' 00:04:12.413 02:52:06 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:12.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.413 --rc genhtml_branch_coverage=1 00:04:12.413 --rc genhtml_function_coverage=1 00:04:12.413 --rc genhtml_legend=1 00:04:12.413 --rc geninfo_all_blocks=1 00:04:12.413 --rc geninfo_unexecuted_blocks=1 00:04:12.413 00:04:12.413 ' 00:04:12.413 02:52:06 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:12.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.413 --rc genhtml_branch_coverage=1 00:04:12.413 --rc genhtml_function_coverage=1 00:04:12.413 --rc genhtml_legend=1 00:04:12.413 --rc geninfo_all_blocks=1 00:04:12.413 --rc geninfo_unexecuted_blocks=1 00:04:12.413 00:04:12.413 ' 00:04:12.413 02:52:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:12.413 OK 00:04:12.413 02:52:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:12.413 00:04:12.413 real 0m0.168s 00:04:12.413 user 0m0.099s 00:04:12.413 sys 0m0.074s 00:04:12.413 02:52:06 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.413 ************************************ 00:04:12.413 END TEST rpc_client 00:04:12.413 ************************************ 00:04:12.413 02:52:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:12.413 02:52:06 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:12.413 02:52:06 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.413 02:52:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.413 02:52:06 -- common/autotest_common.sh@10 -- # set +x 00:04:12.413 ************************************ 00:04:12.413 START TEST json_config 00:04:12.413 ************************************ 00:04:12.413 02:52:06 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:12.413 02:52:06 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:12.413 02:52:06 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:12.413 02:52:06 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:12.413 02:52:06 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:12.413 02:52:06 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.413 02:52:06 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.413 02:52:06 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.413 02:52:06 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.413 02:52:06 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.413 02:52:06 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.413 02:52:06 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.413 02:52:06 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.413 02:52:06 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.413 02:52:06 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.413 02:52:06 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.413 02:52:06 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:12.413 02:52:06 json_config -- scripts/common.sh@345 -- # : 1 00:04:12.413 02:52:06 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.413 02:52:06 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.413 02:52:06 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:12.414 02:52:06 json_config -- scripts/common.sh@353 -- # local d=1 00:04:12.414 02:52:06 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.414 02:52:06 json_config -- scripts/common.sh@355 -- # echo 1 00:04:12.414 02:52:06 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.414 02:52:06 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:12.414 02:52:06 json_config -- scripts/common.sh@353 -- # local d=2 00:04:12.414 02:52:06 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.414 02:52:06 json_config -- scripts/common.sh@355 -- # echo 2 00:04:12.414 02:52:06 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.414 02:52:06 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.414 02:52:06 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.414 02:52:06 json_config -- scripts/common.sh@368 -- # return 0 00:04:12.414 02:52:06 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.414 02:52:06 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:12.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.414 --rc genhtml_branch_coverage=1 00:04:12.414 --rc genhtml_function_coverage=1 00:04:12.414 --rc genhtml_legend=1 00:04:12.414 --rc geninfo_all_blocks=1 00:04:12.414 --rc geninfo_unexecuted_blocks=1 00:04:12.414 00:04:12.414 ' 00:04:12.414 02:52:06 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:12.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.414 --rc genhtml_branch_coverage=1 00:04:12.414 --rc genhtml_function_coverage=1 00:04:12.414 --rc genhtml_legend=1 00:04:12.414 --rc geninfo_all_blocks=1 00:04:12.414 --rc geninfo_unexecuted_blocks=1 00:04:12.414 00:04:12.414 ' 00:04:12.414 02:52:06 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:12.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.414 --rc genhtml_branch_coverage=1 00:04:12.414 --rc genhtml_function_coverage=1 00:04:12.414 --rc genhtml_legend=1 00:04:12.414 --rc geninfo_all_blocks=1 00:04:12.414 --rc geninfo_unexecuted_blocks=1 00:04:12.414 00:04:12.414 ' 00:04:12.414 02:52:06 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:12.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.414 --rc genhtml_branch_coverage=1 00:04:12.414 --rc genhtml_function_coverage=1 00:04:12.414 --rc genhtml_legend=1 00:04:12.414 --rc geninfo_all_blocks=1 00:04:12.414 --rc geninfo_unexecuted_blocks=1 00:04:12.414 00:04:12.414 ' 00:04:12.414 02:52:06 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:12.414 02:52:06 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0786ddb0-789e-43ba-ae3b-8cda27c29efa 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0786ddb0-789e-43ba-ae3b-8cda27c29efa 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:12.414 02:52:06 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:12.414 02:52:06 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:12.414 02:52:06 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:12.414 02:52:06 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:12.414 02:52:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.414 02:52:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.414 02:52:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.414 02:52:06 json_config -- paths/export.sh@5 -- # export PATH 00:04:12.414 02:52:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@51 -- # : 0 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:12.414 02:52:06 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:12.414 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:12.414 02:52:06 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:12.414 02:52:06 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:12.414 02:52:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:12.414 02:52:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:12.414 02:52:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:12.414 02:52:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:12.414 WARNING: No tests are enabled so not running JSON configuration tests 00:04:12.414 02:52:06 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:12.414 02:52:06 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:12.414 ************************************ 00:04:12.414 END TEST json_config 00:04:12.414 ************************************ 00:04:12.414 00:04:12.414 real 0m0.128s 00:04:12.414 user 0m0.083s 00:04:12.414 sys 0m0.050s 00:04:12.414 02:52:06 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.414 02:52:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.414 02:52:06 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:12.414 02:52:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.414 02:52:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.414 02:52:06 -- common/autotest_common.sh@10 -- # set +x 00:04:12.414 ************************************ 00:04:12.414 START TEST json_config_extra_key 00:04:12.414 ************************************ 00:04:12.414 02:52:06 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:12.687 02:52:06 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:12.687 02:52:06 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:12.687 02:52:06 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:12.687 02:52:06 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.687 02:52:06 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.687 02:52:06 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:12.687 02:52:06 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.687 02:52:06 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:12.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.687 --rc genhtml_branch_coverage=1 00:04:12.687 --rc genhtml_function_coverage=1 00:04:12.687 --rc genhtml_legend=1 00:04:12.687 --rc geninfo_all_blocks=1 00:04:12.687 --rc geninfo_unexecuted_blocks=1 00:04:12.687 00:04:12.687 ' 00:04:12.687 02:52:06 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:12.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.687 --rc genhtml_branch_coverage=1 00:04:12.687 --rc genhtml_function_coverage=1 00:04:12.687 --rc genhtml_legend=1 00:04:12.687 --rc geninfo_all_blocks=1 00:04:12.687 --rc geninfo_unexecuted_blocks=1 00:04:12.687 00:04:12.687 ' 00:04:12.687 02:52:06 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:12.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.687 --rc genhtml_branch_coverage=1 00:04:12.687 --rc genhtml_function_coverage=1 00:04:12.687 --rc genhtml_legend=1 00:04:12.687 --rc geninfo_all_blocks=1 00:04:12.687 --rc geninfo_unexecuted_blocks=1 00:04:12.687 00:04:12.687 ' 00:04:12.688 02:52:06 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:12.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.688 --rc genhtml_branch_coverage=1 00:04:12.688 --rc 
genhtml_function_coverage=1 00:04:12.688 --rc genhtml_legend=1 00:04:12.688 --rc geninfo_all_blocks=1 00:04:12.688 --rc geninfo_unexecuted_blocks=1 00:04:12.688 00:04:12.688 ' 00:04:12.688 02:52:06 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0786ddb0-789e-43ba-ae3b-8cda27c29efa 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0786ddb0-789e-43ba-ae3b-8cda27c29efa 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:12.688 02:52:06 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:12.688 02:52:06 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:12.688 02:52:06 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:12.688 02:52:06 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:12.688 02:52:06 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.688 02:52:06 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.688 02:52:06 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.688 02:52:06 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:12.688 02:52:06 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:12.688 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:12.688 02:52:06 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:12.688 02:52:06 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:12.688 02:52:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:12.688 02:52:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:12.688 02:52:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:12.688 02:52:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:12.688 02:52:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:12.688 02:52:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:12.688 02:52:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:12.688 02:52:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:12.688 02:52:06 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:12.688 INFO: launching applications... 00:04:12.688 02:52:06 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
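The "[: : integer expression expected" complaint that nvmf/common.sh emits at line 33 is ordinary bash behavior, not a test failure: the traced test is '[' '' -eq 1 ']', and -eq requires integer operands, so an empty expansion makes [ return status 2 and print that message while the script carries on. A minimal sketch (not SPDK code; the variable name is made up) showing the failure mode and a defaulting guard:

    #!/usr/bin/env bash
    flag=""    # hypothetical; stands in for the empty expansion in the trace

    # Mirrors the traced test: with an empty operand, '[' prints
    # "integer expression expected" on stderr and returns status 2.
    [ "$flag" -eq 1 ] && echo enabled

    # Defaulting the expansion keeps the operand numeric and avoids the error.
    [ "${flag:-0}" -eq 1 ] && echo enabled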
00:04:12.688 02:52:06 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:12.688 02:52:06 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:12.688 02:52:06 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:12.688 02:52:06 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:12.688 02:52:06 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:12.688 02:52:06 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:12.688 02:52:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.688 02:52:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.688 02:52:06 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57791 00:04:12.688 Waiting for target to run... 00:04:12.688 02:52:06 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:12.688 02:52:06 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57791 /var/tmp/spdk_tgt.sock 00:04:12.688 02:52:06 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:12.688 02:52:06 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57791 ']' 00:04:12.688 02:52:06 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:12.688 02:52:06 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:12.688 02:52:06 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:12.688 02:52:06 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.688 02:52:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:12.688 [2024-12-10 02:52:07.037840] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:12.688 [2024-12-10 02:52:07.038038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57791 ] 00:04:13.254 [2024-12-10 02:52:07.358209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.254 [2024-12-10 02:52:07.436985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.511 02:52:07 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.512 02:52:07 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:13.512 00:04:13.512 02:52:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:13.512 INFO: shutting down applications... 00:04:13.512 02:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
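The start-up half of this test, traced above, launches spdk_tgt with the extra_key.json config and then calls waitforlisten 57791 /var/tmp/spdk_tgt.sock with max_retries=100. The traced helper hides its loop behind xtrace_disable, so the sketch below fills in a plausible body; the rpc.py probe is an assumption, not SPDK's actual readiness check:

    #!/usr/bin/env bash
    # waitforlisten-style readiness poll: wait until the target's RPC socket
    # answers, or give up if the process dies or retries run out.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died while starting
            # Assumed probe: any cheap RPC succeeding means the socket is live.
            ./scripts/rpc.py -s "$rpc_addr" -t 1 spdk_get_version &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }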
00:04:13.512 02:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:13.512 02:52:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:13.512 02:52:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:13.512 02:52:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57791 ]] 00:04:13.512 02:52:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57791 00:04:13.512 02:52:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:13.512 02:52:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.512 02:52:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57791 00:04:13.512 02:52:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:14.078 02:52:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:14.078 02:52:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.078 02:52:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57791 00:04:14.078 02:52:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:14.642 02:52:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:14.642 02:52:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.642 02:52:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57791 00:04:14.642 02:52:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:15.207 02:52:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:15.207 02:52:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.207 02:52:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57791 00:04:15.207 02:52:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:15.207 02:52:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:15.207 02:52:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:15.207 SPDK target shutdown done 00:04:15.207 02:52:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:15.207 Success 00:04:15.207 02:52:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:15.207 00:04:15.207 real 0m2.564s 00:04:15.207 user 0m2.299s 00:04:15.207 sys 0m0.409s 00:04:15.207 02:52:09 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.207 02:52:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:15.207 ************************************ 00:04:15.207 END TEST json_config_extra_key 00:04:15.207 ************************************ 00:04:15.207 02:52:09 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:15.207 02:52:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.207 02:52:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.207 02:52:09 -- common/autotest_common.sh@10 -- # set +x 00:04:15.207 ************************************ 00:04:15.207 START TEST alias_rpc 00:04:15.207 ************************************ 00:04:15.207 02:52:09 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:15.207 * Looking for test storage... 
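The shutdown traced above (json_config/common.sh@38-45) is a plain poll loop: send SIGINT, then test liveness with kill -0 every half second for up to 30 rounds, and break as soon as the target is gone. The same pattern as a standalone sketch:

    #!/usr/bin/env bash
    # SIGINT-then-poll stop, mirroring the json_config/common.sh trace.
    stop_target() {
        local pid=$1
        kill -SIGINT "$pid" 2>/dev/null
        for ((i = 0; i < 30; i++)); do
            # kill -0 sends no signal; it only probes that the pid still exists.
            kill -0 "$pid" 2>/dev/null || {
                echo 'SPDK target shutdown done'
                return 0
            }
            sleep 0.5
        done
        return 1    # still alive after ~15s; a caller could escalate to SIGKILL
    }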
00:04:15.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:15.207 02:52:09 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.207 02:52:09 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.207 02:52:09 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.207 02:52:09 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.207 02:52:09 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.207 02:52:09 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.207 02:52:09 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.207 02:52:09 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.208 02:52:09 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:15.208 02:52:09 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.208 02:52:09 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.208 --rc genhtml_branch_coverage=1 00:04:15.208 --rc genhtml_function_coverage=1 00:04:15.208 --rc genhtml_legend=1 00:04:15.208 --rc geninfo_all_blocks=1 00:04:15.208 --rc geninfo_unexecuted_blocks=1 00:04:15.208 00:04:15.208 ' 00:04:15.208 02:52:09 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.208 --rc genhtml_branch_coverage=1 00:04:15.208 --rc genhtml_function_coverage=1 00:04:15.208 --rc genhtml_legend=1 00:04:15.208 --rc geninfo_all_blocks=1 00:04:15.208 --rc geninfo_unexecuted_blocks=1 00:04:15.208 00:04:15.208 ' 00:04:15.208 02:52:09 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.208 --rc genhtml_branch_coverage=1 00:04:15.208 --rc genhtml_function_coverage=1 00:04:15.208 --rc genhtml_legend=1 00:04:15.208 --rc geninfo_all_blocks=1 00:04:15.208 --rc geninfo_unexecuted_blocks=1 00:04:15.208 00:04:15.208 ' 00:04:15.208 02:52:09 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.208 --rc genhtml_branch_coverage=1 00:04:15.208 --rc genhtml_function_coverage=1 00:04:15.208 --rc genhtml_legend=1 00:04:15.208 --rc geninfo_all_blocks=1 00:04:15.208 --rc geninfo_unexecuted_blocks=1 00:04:15.208 00:04:15.208 ' 00:04:15.208 02:52:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:15.208 02:52:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57878 00:04:15.208 02:52:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57878 00:04:15.208 02:52:09 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57878 ']' 00:04:15.208 02:52:09 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.208 02:52:09 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.208 02:52:09 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.208 02:52:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.208 02:52:09 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.208 02:52:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.470 [2024-12-10 02:52:09.612960] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
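Note that alias_rpc.sh registers trap 'killprocess $spdk_tgt_pid; exit 1' ERR immediately after recording the target's pid, so a failing command anywhere in the test still tears the daemon down. The general shape of that pattern, as a sketch with a stand-in daemon:

    #!/usr/bin/env bash
    # ERR-trap teardown around a background daemon, after alias_rpc.sh@10-13.
    # 'sleep 1000' is a stand-in for spdk_tgt here.
    sleep 1000 &
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; exit 1' ERR

    false    # any failing top-level command now kills the daemon and exits 1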
00:04:15.470 [2024-12-10 02:52:09.613080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57878 ] 00:04:15.470 [2024-12-10 02:52:09.771548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.728 [2024-12-10 02:52:09.868909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.294 02:52:10 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.294 02:52:10 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:16.294 02:52:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:16.551 02:52:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57878 00:04:16.551 02:52:10 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57878 ']' 00:04:16.551 02:52:10 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57878 00:04:16.551 02:52:10 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:16.551 02:52:10 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.551 02:52:10 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57878 00:04:16.551 02:52:10 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.551 killing process with pid 57878 00:04:16.551 02:52:10 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.551 02:52:10 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57878' 00:04:16.551 02:52:10 alias_rpc -- common/autotest_common.sh@973 -- # kill 57878 00:04:16.551 02:52:10 alias_rpc -- common/autotest_common.sh@978 -- # wait 57878 00:04:17.955 00:04:17.955 real 0m2.832s 00:04:17.955 user 0m2.933s 00:04:17.955 sys 0m0.399s 00:04:17.955 02:52:12 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.955 ************************************ 00:04:17.955 END TEST alias_rpc 00:04:17.955 ************************************ 00:04:17.955 02:52:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.955 02:52:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:17.955 02:52:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:17.955 02:52:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.955 02:52:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.955 02:52:12 -- common/autotest_common.sh@10 -- # set +x 00:04:17.955 ************************************ 00:04:17.955 START TEST spdkcli_tcp 00:04:17.955 ************************************ 00:04:17.955 02:52:12 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:17.955 * Looking for test storage... 
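killprocess, traced above for pid 57878, is deliberately defensive: it verifies the pid is set and alive, and on Linux reads the process's comm field so it never blindly signals a sudo wrapper; only then does it kill and reap. A condensed sketch of that flow:

    #!/usr/bin/env bash
    # Condensed killprocess flow, following the pid-57878 trace above.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1          # the traced '[ -z 57878 ]' check, inverted
        kill -0 "$pid" || return 1         # refuse to act on a dead pid
        if [ "$(uname)" = Linux ]; then
            # never kill a sudo wrapper, per the 'reactor_0 = sudo' comparison
            [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # reap it when it is our child
    }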
00:04:17.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:17.955 02:52:12 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:17.955 02:52:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:17.955 02:52:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:18.213 02:52:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:18.213 02:52:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.213 02:52:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.213 02:52:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.213 02:52:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.214 02:52:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:18.214 02:52:12 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.214 02:52:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:18.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.214 --rc genhtml_branch_coverage=1 00:04:18.214 --rc genhtml_function_coverage=1 00:04:18.214 --rc genhtml_legend=1 00:04:18.214 --rc geninfo_all_blocks=1 00:04:18.214 --rc geninfo_unexecuted_blocks=1 00:04:18.214 00:04:18.214 ' 00:04:18.214 02:52:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:18.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.214 --rc genhtml_branch_coverage=1 00:04:18.214 --rc genhtml_function_coverage=1 00:04:18.214 --rc genhtml_legend=1 00:04:18.214 --rc geninfo_all_blocks=1 00:04:18.214 --rc geninfo_unexecuted_blocks=1 00:04:18.214 
00:04:18.214 ' 00:04:18.214 02:52:12 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:18.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.214 --rc genhtml_branch_coverage=1 00:04:18.214 --rc genhtml_function_coverage=1 00:04:18.214 --rc genhtml_legend=1 00:04:18.214 --rc geninfo_all_blocks=1 00:04:18.214 --rc geninfo_unexecuted_blocks=1 00:04:18.214 00:04:18.214 ' 00:04:18.214 02:52:12 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:18.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.214 --rc genhtml_branch_coverage=1 00:04:18.214 --rc genhtml_function_coverage=1 00:04:18.214 --rc genhtml_legend=1 00:04:18.214 --rc geninfo_all_blocks=1 00:04:18.214 --rc geninfo_unexecuted_blocks=1 00:04:18.214 00:04:18.214 ' 00:04:18.214 02:52:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:18.214 02:52:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:18.214 02:52:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:18.214 02:52:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:18.214 02:52:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:18.214 02:52:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:18.214 02:52:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:18.214 02:52:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.214 02:52:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.214 02:52:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57974 00:04:18.214 02:52:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57974 00:04:18.214 02:52:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:18.214 02:52:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57974 ']' 00:04:18.214 02:52:12 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.214 02:52:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.214 02:52:12 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.214 02:52:12 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.214 02:52:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.214 [2024-12-10 02:52:12.474499] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:18.214 [2024-12-10 02:52:12.474625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57974 ] 00:04:18.471 [2024-12-10 02:52:12.635475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.471 [2024-12-10 02:52:12.738167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.471 [2024-12-10 02:52:12.738344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.036 02:52:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.036 02:52:13 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:19.036 02:52:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57985 00:04:19.036 02:52:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:19.036 02:52:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:19.295 [ 00:04:19.295 "bdev_malloc_delete", 00:04:19.295 "bdev_malloc_create", 00:04:19.295 "bdev_null_resize", 00:04:19.295 "bdev_null_delete", 00:04:19.295 "bdev_null_create", 00:04:19.295 "bdev_nvme_cuse_unregister", 00:04:19.295 "bdev_nvme_cuse_register", 00:04:19.295 "bdev_opal_new_user", 00:04:19.295 "bdev_opal_set_lock_state", 00:04:19.295 "bdev_opal_delete", 00:04:19.295 "bdev_opal_get_info", 00:04:19.295 "bdev_opal_create", 00:04:19.295 "bdev_nvme_opal_revert", 00:04:19.295 "bdev_nvme_opal_init", 00:04:19.295 "bdev_nvme_send_cmd", 00:04:19.295 "bdev_nvme_set_keys", 00:04:19.295 "bdev_nvme_get_path_iostat", 00:04:19.295 "bdev_nvme_get_mdns_discovery_info", 00:04:19.295 "bdev_nvme_stop_mdns_discovery", 00:04:19.295 "bdev_nvme_start_mdns_discovery", 00:04:19.295 "bdev_nvme_set_multipath_policy", 00:04:19.295 "bdev_nvme_set_preferred_path", 00:04:19.295 "bdev_nvme_get_io_paths", 00:04:19.295 "bdev_nvme_remove_error_injection", 00:04:19.295 "bdev_nvme_add_error_injection", 00:04:19.295 "bdev_nvme_get_discovery_info", 00:04:19.295 "bdev_nvme_stop_discovery", 00:04:19.295 "bdev_nvme_start_discovery", 00:04:19.295 "bdev_nvme_get_controller_health_info", 00:04:19.295 "bdev_nvme_disable_controller", 00:04:19.295 "bdev_nvme_enable_controller", 00:04:19.295 "bdev_nvme_reset_controller", 00:04:19.295 "bdev_nvme_get_transport_statistics", 00:04:19.295 "bdev_nvme_apply_firmware", 00:04:19.295 "bdev_nvme_detach_controller", 00:04:19.295 "bdev_nvme_get_controllers", 00:04:19.295 "bdev_nvme_attach_controller", 00:04:19.295 "bdev_nvme_set_hotplug", 00:04:19.295 "bdev_nvme_set_options", 00:04:19.295 "bdev_passthru_delete", 00:04:19.295 "bdev_passthru_create", 00:04:19.295 "bdev_lvol_set_parent_bdev", 00:04:19.295 "bdev_lvol_set_parent", 00:04:19.295 "bdev_lvol_check_shallow_copy", 00:04:19.295 "bdev_lvol_start_shallow_copy", 00:04:19.295 "bdev_lvol_grow_lvstore", 00:04:19.295 "bdev_lvol_get_lvols", 00:04:19.295 "bdev_lvol_get_lvstores", 00:04:19.295 "bdev_lvol_delete", 00:04:19.295 "bdev_lvol_set_read_only", 00:04:19.295 "bdev_lvol_resize", 00:04:19.295 "bdev_lvol_decouple_parent", 00:04:19.295 "bdev_lvol_inflate", 00:04:19.295 "bdev_lvol_rename", 00:04:19.295 "bdev_lvol_clone_bdev", 00:04:19.295 "bdev_lvol_clone", 00:04:19.295 "bdev_lvol_snapshot", 00:04:19.295 "bdev_lvol_create", 00:04:19.295 "bdev_lvol_delete_lvstore", 00:04:19.295 "bdev_lvol_rename_lvstore", 00:04:19.295 
"bdev_lvol_create_lvstore", 00:04:19.295 "bdev_raid_set_options", 00:04:19.295 "bdev_raid_remove_base_bdev", 00:04:19.295 "bdev_raid_add_base_bdev", 00:04:19.295 "bdev_raid_delete", 00:04:19.295 "bdev_raid_create", 00:04:19.295 "bdev_raid_get_bdevs", 00:04:19.295 "bdev_error_inject_error", 00:04:19.295 "bdev_error_delete", 00:04:19.295 "bdev_error_create", 00:04:19.295 "bdev_split_delete", 00:04:19.295 "bdev_split_create", 00:04:19.295 "bdev_delay_delete", 00:04:19.295 "bdev_delay_create", 00:04:19.295 "bdev_delay_update_latency", 00:04:19.295 "bdev_zone_block_delete", 00:04:19.295 "bdev_zone_block_create", 00:04:19.295 "blobfs_create", 00:04:19.295 "blobfs_detect", 00:04:19.295 "blobfs_set_cache_size", 00:04:19.295 "bdev_xnvme_delete", 00:04:19.295 "bdev_xnvme_create", 00:04:19.295 "bdev_aio_delete", 00:04:19.295 "bdev_aio_rescan", 00:04:19.295 "bdev_aio_create", 00:04:19.295 "bdev_ftl_set_property", 00:04:19.295 "bdev_ftl_get_properties", 00:04:19.295 "bdev_ftl_get_stats", 00:04:19.295 "bdev_ftl_unmap", 00:04:19.295 "bdev_ftl_unload", 00:04:19.295 "bdev_ftl_delete", 00:04:19.295 "bdev_ftl_load", 00:04:19.295 "bdev_ftl_create", 00:04:19.295 "bdev_virtio_attach_controller", 00:04:19.295 "bdev_virtio_scsi_get_devices", 00:04:19.295 "bdev_virtio_detach_controller", 00:04:19.295 "bdev_virtio_blk_set_hotplug", 00:04:19.295 "bdev_iscsi_delete", 00:04:19.295 "bdev_iscsi_create", 00:04:19.295 "bdev_iscsi_set_options", 00:04:19.295 "accel_error_inject_error", 00:04:19.295 "ioat_scan_accel_module", 00:04:19.295 "dsa_scan_accel_module", 00:04:19.295 "iaa_scan_accel_module", 00:04:19.295 "keyring_file_remove_key", 00:04:19.295 "keyring_file_add_key", 00:04:19.295 "keyring_linux_set_options", 00:04:19.295 "fsdev_aio_delete", 00:04:19.295 "fsdev_aio_create", 00:04:19.295 "iscsi_get_histogram", 00:04:19.295 "iscsi_enable_histogram", 00:04:19.295 "iscsi_set_options", 00:04:19.295 "iscsi_get_auth_groups", 00:04:19.295 "iscsi_auth_group_remove_secret", 00:04:19.295 "iscsi_auth_group_add_secret", 00:04:19.295 "iscsi_delete_auth_group", 00:04:19.295 "iscsi_create_auth_group", 00:04:19.295 "iscsi_set_discovery_auth", 00:04:19.295 "iscsi_get_options", 00:04:19.295 "iscsi_target_node_request_logout", 00:04:19.295 "iscsi_target_node_set_redirect", 00:04:19.295 "iscsi_target_node_set_auth", 00:04:19.295 "iscsi_target_node_add_lun", 00:04:19.295 "iscsi_get_stats", 00:04:19.295 "iscsi_get_connections", 00:04:19.295 "iscsi_portal_group_set_auth", 00:04:19.295 "iscsi_start_portal_group", 00:04:19.295 "iscsi_delete_portal_group", 00:04:19.295 "iscsi_create_portal_group", 00:04:19.295 "iscsi_get_portal_groups", 00:04:19.295 "iscsi_delete_target_node", 00:04:19.295 "iscsi_target_node_remove_pg_ig_maps", 00:04:19.295 "iscsi_target_node_add_pg_ig_maps", 00:04:19.295 "iscsi_create_target_node", 00:04:19.295 "iscsi_get_target_nodes", 00:04:19.295 "iscsi_delete_initiator_group", 00:04:19.295 "iscsi_initiator_group_remove_initiators", 00:04:19.295 "iscsi_initiator_group_add_initiators", 00:04:19.295 "iscsi_create_initiator_group", 00:04:19.295 "iscsi_get_initiator_groups", 00:04:19.295 "nvmf_set_crdt", 00:04:19.295 "nvmf_set_config", 00:04:19.295 "nvmf_set_max_subsystems", 00:04:19.295 "nvmf_stop_mdns_prr", 00:04:19.295 "nvmf_publish_mdns_prr", 00:04:19.295 "nvmf_subsystem_get_listeners", 00:04:19.295 "nvmf_subsystem_get_qpairs", 00:04:19.295 "nvmf_subsystem_get_controllers", 00:04:19.295 "nvmf_get_stats", 00:04:19.295 "nvmf_get_transports", 00:04:19.295 "nvmf_create_transport", 00:04:19.295 "nvmf_get_targets", 00:04:19.295 
"nvmf_delete_target", 00:04:19.295 "nvmf_create_target", 00:04:19.295 "nvmf_subsystem_allow_any_host", 00:04:19.295 "nvmf_subsystem_set_keys", 00:04:19.295 "nvmf_subsystem_remove_host", 00:04:19.295 "nvmf_subsystem_add_host", 00:04:19.295 "nvmf_ns_remove_host", 00:04:19.295 "nvmf_ns_add_host", 00:04:19.295 "nvmf_subsystem_remove_ns", 00:04:19.295 "nvmf_subsystem_set_ns_ana_group", 00:04:19.295 "nvmf_subsystem_add_ns", 00:04:19.295 "nvmf_subsystem_listener_set_ana_state", 00:04:19.295 "nvmf_discovery_get_referrals", 00:04:19.295 "nvmf_discovery_remove_referral", 00:04:19.295 "nvmf_discovery_add_referral", 00:04:19.295 "nvmf_subsystem_remove_listener", 00:04:19.295 "nvmf_subsystem_add_listener", 00:04:19.295 "nvmf_delete_subsystem", 00:04:19.295 "nvmf_create_subsystem", 00:04:19.295 "nvmf_get_subsystems", 00:04:19.295 "env_dpdk_get_mem_stats", 00:04:19.295 "nbd_get_disks", 00:04:19.295 "nbd_stop_disk", 00:04:19.295 "nbd_start_disk", 00:04:19.295 "ublk_recover_disk", 00:04:19.295 "ublk_get_disks", 00:04:19.295 "ublk_stop_disk", 00:04:19.295 "ublk_start_disk", 00:04:19.295 "ublk_destroy_target", 00:04:19.295 "ublk_create_target", 00:04:19.295 "virtio_blk_create_transport", 00:04:19.295 "virtio_blk_get_transports", 00:04:19.295 "vhost_controller_set_coalescing", 00:04:19.295 "vhost_get_controllers", 00:04:19.295 "vhost_delete_controller", 00:04:19.295 "vhost_create_blk_controller", 00:04:19.295 "vhost_scsi_controller_remove_target", 00:04:19.295 "vhost_scsi_controller_add_target", 00:04:19.295 "vhost_start_scsi_controller", 00:04:19.295 "vhost_create_scsi_controller", 00:04:19.295 "thread_set_cpumask", 00:04:19.295 "scheduler_set_options", 00:04:19.295 "framework_get_governor", 00:04:19.295 "framework_get_scheduler", 00:04:19.295 "framework_set_scheduler", 00:04:19.295 "framework_get_reactors", 00:04:19.295 "thread_get_io_channels", 00:04:19.295 "thread_get_pollers", 00:04:19.295 "thread_get_stats", 00:04:19.295 "framework_monitor_context_switch", 00:04:19.295 "spdk_kill_instance", 00:04:19.295 "log_enable_timestamps", 00:04:19.295 "log_get_flags", 00:04:19.295 "log_clear_flag", 00:04:19.295 "log_set_flag", 00:04:19.295 "log_get_level", 00:04:19.295 "log_set_level", 00:04:19.295 "log_get_print_level", 00:04:19.295 "log_set_print_level", 00:04:19.295 "framework_enable_cpumask_locks", 00:04:19.295 "framework_disable_cpumask_locks", 00:04:19.295 "framework_wait_init", 00:04:19.295 "framework_start_init", 00:04:19.295 "scsi_get_devices", 00:04:19.295 "bdev_get_histogram", 00:04:19.295 "bdev_enable_histogram", 00:04:19.295 "bdev_set_qos_limit", 00:04:19.295 "bdev_set_qd_sampling_period", 00:04:19.295 "bdev_get_bdevs", 00:04:19.295 "bdev_reset_iostat", 00:04:19.295 "bdev_get_iostat", 00:04:19.295 "bdev_examine", 00:04:19.295 "bdev_wait_for_examine", 00:04:19.295 "bdev_set_options", 00:04:19.295 "accel_get_stats", 00:04:19.295 "accel_set_options", 00:04:19.295 "accel_set_driver", 00:04:19.295 "accel_crypto_key_destroy", 00:04:19.295 "accel_crypto_keys_get", 00:04:19.295 "accel_crypto_key_create", 00:04:19.295 "accel_assign_opc", 00:04:19.295 "accel_get_module_info", 00:04:19.295 "accel_get_opc_assignments", 00:04:19.295 "vmd_rescan", 00:04:19.295 "vmd_remove_device", 00:04:19.295 "vmd_enable", 00:04:19.295 "sock_get_default_impl", 00:04:19.295 "sock_set_default_impl", 00:04:19.295 "sock_impl_set_options", 00:04:19.296 "sock_impl_get_options", 00:04:19.296 "iobuf_get_stats", 00:04:19.296 "iobuf_set_options", 00:04:19.296 "keyring_get_keys", 00:04:19.296 "framework_get_pci_devices", 00:04:19.296 
"framework_get_config", 00:04:19.296 "framework_get_subsystems", 00:04:19.296 "fsdev_set_opts", 00:04:19.296 "fsdev_get_opts", 00:04:19.296 "trace_get_info", 00:04:19.296 "trace_get_tpoint_group_mask", 00:04:19.296 "trace_disable_tpoint_group", 00:04:19.296 "trace_enable_tpoint_group", 00:04:19.296 "trace_clear_tpoint_mask", 00:04:19.296 "trace_set_tpoint_mask", 00:04:19.296 "notify_get_notifications", 00:04:19.296 "notify_get_types", 00:04:19.296 "spdk_get_version", 00:04:19.296 "rpc_get_methods" 00:04:19.296 ] 00:04:19.296 02:52:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:19.296 02:52:13 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.296 02:52:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:19.296 02:52:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:19.296 02:52:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57974 00:04:19.296 02:52:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57974 ']' 00:04:19.296 02:52:13 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57974 00:04:19.296 02:52:13 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:19.296 02:52:13 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.296 02:52:13 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57974 00:04:19.296 02:52:13 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.296 02:52:13 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.296 02:52:13 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57974' 00:04:19.296 killing process with pid 57974 00:04:19.296 02:52:13 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57974 00:04:19.296 02:52:13 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57974 00:04:21.195 00:04:21.195 real 0m2.891s 00:04:21.195 user 0m5.219s 00:04:21.195 sys 0m0.427s 00:04:21.195 02:52:15 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.195 02:52:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:21.195 ************************************ 00:04:21.195 END TEST spdkcli_tcp 00:04:21.195 ************************************ 00:04:21.195 02:52:15 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:21.195 02:52:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.195 02:52:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.195 02:52:15 -- common/autotest_common.sh@10 -- # set +x 00:04:21.195 ************************************ 00:04:21.195 START TEST dpdk_mem_utility 00:04:21.195 ************************************ 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:21.195 * Looking for test storage... 
00:04:21.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.195 02:52:15 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:21.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.195 --rc genhtml_branch_coverage=1 00:04:21.195 --rc genhtml_function_coverage=1 00:04:21.195 --rc genhtml_legend=1 00:04:21.195 --rc geninfo_all_blocks=1 00:04:21.195 --rc geninfo_unexecuted_blocks=1 00:04:21.195 00:04:21.195 ' 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:21.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.195 --rc 
genhtml_branch_coverage=1 00:04:21.195 --rc genhtml_function_coverage=1 00:04:21.195 --rc genhtml_legend=1 00:04:21.195 --rc geninfo_all_blocks=1 00:04:21.195 --rc geninfo_unexecuted_blocks=1 00:04:21.195 00:04:21.195 ' 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:21.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.195 --rc genhtml_branch_coverage=1 00:04:21.195 --rc genhtml_function_coverage=1 00:04:21.195 --rc genhtml_legend=1 00:04:21.195 --rc geninfo_all_blocks=1 00:04:21.195 --rc geninfo_unexecuted_blocks=1 00:04:21.195 00:04:21.195 ' 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:21.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.195 --rc genhtml_branch_coverage=1 00:04:21.195 --rc genhtml_function_coverage=1 00:04:21.195 --rc genhtml_legend=1 00:04:21.195 --rc geninfo_all_blocks=1 00:04:21.195 --rc geninfo_unexecuted_blocks=1 00:04:21.195 00:04:21.195 ' 00:04:21.195 02:52:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:21.195 02:52:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58079 00:04:21.195 02:52:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58079 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58079 ']' 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.195 02:52:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.195 02:52:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:21.195 [2024-12-10 02:52:15.413671] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:21.195 [2024-12-10 02:52:15.413800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58079 ] 00:04:21.195 [2024-12-10 02:52:15.573153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.453 [2024-12-10 02:52:15.675148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.020 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.020 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:22.020 02:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:22.020 02:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:22.020 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.020 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:22.020 { 00:04:22.020 "filename": "/tmp/spdk_mem_dump.txt" 00:04:22.020 } 00:04:22.020 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.020 02:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:22.020 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:22.020 1 heaps totaling size 824.000000 MiB 00:04:22.020 size: 824.000000 MiB heap id: 0 00:04:22.020 end heaps---------- 00:04:22.020 9 mempools totaling size 603.782043 MiB 00:04:22.020 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:22.020 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:22.021 size: 100.555481 MiB name: bdev_io_58079 00:04:22.021 size: 50.003479 MiB name: msgpool_58079 00:04:22.021 size: 36.509338 MiB name: fsdev_io_58079 00:04:22.021 size: 21.763794 MiB name: PDU_Pool 00:04:22.021 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:22.021 size: 4.133484 MiB name: evtpool_58079 00:04:22.021 size: 0.026123 MiB name: Session_Pool 00:04:22.021 end mempools------- 00:04:22.021 6 memzones totaling size 4.142822 MiB 00:04:22.021 size: 1.000366 MiB name: RG_ring_0_58079 00:04:22.021 size: 1.000366 MiB name: RG_ring_1_58079 00:04:22.021 size: 1.000366 MiB name: RG_ring_4_58079 00:04:22.021 size: 1.000366 MiB name: RG_ring_5_58079 00:04:22.021 size: 0.125366 MiB name: RG_ring_2_58079 00:04:22.021 size: 0.015991 MiB name: RG_ring_3_58079 00:04:22.021 end memzones------- 00:04:22.021 02:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:22.021 heap id: 0 total size: 824.000000 MiB number of busy elements: 326 number of free elements: 18 00:04:22.021 list of free elements. 
size: 16.778687 MiB 00:04:22.021 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:22.021 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:22.021 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:22.021 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:22.021 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:22.021 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:22.021 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:22.021 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:22.021 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:22.021 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:22.021 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:22.021 element at address: 0x20001b400000 with size: 0.559021 MiB 00:04:22.021 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:22.021 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:22.021 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:22.021 element at address: 0x200012c00000 with size: 0.433228 MiB 00:04:22.021 element at address: 0x200028800000 with size: 0.391663 MiB 00:04:22.021 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:22.021 list of standard malloc elements. size: 199.290405 MiB 00:04:22.021 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:22.021 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:22.021 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:22.021 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:22.021 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:22.021 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:22.021 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:22.021 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:22.021 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:22.021 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:22.021 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:22.021 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:22.021 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:22.021 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:22.021 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:22.021 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:22.021 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48f1c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48f2c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48f3c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b490dc0 with size: 0.000244 MiB 
00:04:22.022 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:22.022 element at 
address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:22.022 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:22.023 element at address: 0x200028864440 with size: 0.000244 MiB 00:04:22.023 element at address: 0x200028864540 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886b200 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886cd80 
with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:22.023 element at address: 0x20002886fe80 with size: 0.000244 MiB 
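
The element lists above are the verbose (-m 0) view of the dump that the env_dpdk_get_mem_stats RPC wrote to /tmp/spdk_mem_dump.txt earlier in this test. As a minimal sketch, the same dump can be regenerated and inspected by hand against a running spdk_tgt, using only the invocations already visible in this log; the final grep is illustrative post-processing, not part of the test:

    # From the SPDK repo root, with spdk_tgt serving the default /var/tmp/spdk.sock:
    scripts/rpc.py env_dpdk_get_mem_stats      # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                   # heap/mempool/memzone summary
    scripts/dpdk_mem_info.py -m 0              # per-element detail for heap id 0, as above
    grep -c 'element at address' /tmp/spdk_mem_dump.txt   # rough element count from the raw dump
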
00:04:22.023 list of memzone associated elements. size: 607.930908 MiB 00:04:22.023 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:22.023 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:22.023 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:22.023 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:22.023 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:22.023 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58079_0 00:04:22.023 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:22.023 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58079_0 00:04:22.023 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:22.023 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58079_0 00:04:22.023 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:22.023 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:22.023 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:22.023 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:22.023 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:22.023 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58079_0 00:04:22.023 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:22.023 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58079 00:04:22.023 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:22.023 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58079 00:04:22.023 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:22.023 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:22.023 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:22.023 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:22.023 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:22.023 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:22.023 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:22.023 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:22.023 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:22.023 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58079 00:04:22.023 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:22.023 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58079 00:04:22.023 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:22.023 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58079 00:04:22.023 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:22.023 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58079 00:04:22.023 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:22.023 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58079 00:04:22.023 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:22.023 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58079 00:04:22.023 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:22.023 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:22.023 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:22.023 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:22.023 element at address: 0x200019e7c440 with size: 0.250549 MiB 
00:04:22.023 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:22.023 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:22.023 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58079 00:04:22.023 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:22.023 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58079 00:04:22.023 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:22.023 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:22.023 element at address: 0x200028864640 with size: 0.023804 MiB 00:04:22.023 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:22.023 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:22.023 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58079 00:04:22.023 element at address: 0x20002886a7c0 with size: 0.002502 MiB 00:04:22.023 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:22.023 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:22.023 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58079 00:04:22.023 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:22.023 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58079 00:04:22.023 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:22.023 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58079 00:04:22.023 element at address: 0x20002886b300 with size: 0.000366 MiB 00:04:22.024 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:22.024 02:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:22.024 02:52:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58079 00:04:22.024 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58079 ']' 00:04:22.024 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58079 00:04:22.024 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:22.024 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.024 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58079 00:04:22.281 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.281 killing process with pid 58079 00:04:22.281 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.281 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58079' 00:04:22.281 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58079 00:04:22.281 02:52:16 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58079 00:04:23.654 00:04:23.654 real 0m2.743s 00:04:23.654 user 0m2.747s 00:04:23.654 sys 0m0.403s 00:04:23.654 02:52:17 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.654 02:52:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:23.654 ************************************ 00:04:23.654 END TEST dpdk_mem_utility 00:04:23.654 ************************************ 00:04:23.654 02:52:17 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:23.654 02:52:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.654 02:52:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.654 
02:52:17 -- common/autotest_common.sh@10 -- # set +x 00:04:23.654 ************************************ 00:04:23.654 START TEST event 00:04:23.654 ************************************ 00:04:23.654 02:52:17 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:23.654 * Looking for test storage... 00:04:23.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:23.928 02:52:18 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:23.928 02:52:18 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:23.928 02:52:18 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:23.928 02:52:18 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:23.928 02:52:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.928 02:52:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.928 02:52:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.928 02:52:18 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.928 02:52:18 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.928 02:52:18 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.928 02:52:18 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.928 02:52:18 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.928 02:52:18 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.928 02:52:18 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.928 02:52:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.928 02:52:18 event -- scripts/common.sh@344 -- # case "$op" in 00:04:23.928 02:52:18 event -- scripts/common.sh@345 -- # : 1 00:04:23.928 02:52:18 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.928 02:52:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.928 02:52:18 event -- scripts/common.sh@365 -- # decimal 1 00:04:23.928 02:52:18 event -- scripts/common.sh@353 -- # local d=1 00:04:23.928 02:52:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.928 02:52:18 event -- scripts/common.sh@355 -- # echo 1 00:04:23.928 02:52:18 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.928 02:52:18 event -- scripts/common.sh@366 -- # decimal 2 00:04:23.928 02:52:18 event -- scripts/common.sh@353 -- # local d=2 00:04:23.928 02:52:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.928 02:52:18 event -- scripts/common.sh@355 -- # echo 2 00:04:23.928 02:52:18 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.928 02:52:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.928 02:52:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.928 02:52:18 event -- scripts/common.sh@368 -- # return 0 00:04:23.928 02:52:18 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.928 02:52:18 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:23.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.928 --rc genhtml_branch_coverage=1 00:04:23.928 --rc genhtml_function_coverage=1 00:04:23.928 --rc genhtml_legend=1 00:04:23.928 --rc geninfo_all_blocks=1 00:04:23.928 --rc geninfo_unexecuted_blocks=1 00:04:23.928 00:04:23.928 ' 00:04:23.928 02:52:18 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:23.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.928 --rc genhtml_branch_coverage=1 00:04:23.928 --rc genhtml_function_coverage=1 00:04:23.928 --rc genhtml_legend=1 00:04:23.928 --rc geninfo_all_blocks=1 00:04:23.928 --rc geninfo_unexecuted_blocks=1 00:04:23.928 00:04:23.928 ' 00:04:23.928 02:52:18 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:23.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.928 --rc genhtml_branch_coverage=1 00:04:23.928 --rc genhtml_function_coverage=1 00:04:23.928 --rc genhtml_legend=1 00:04:23.928 --rc geninfo_all_blocks=1 00:04:23.928 --rc geninfo_unexecuted_blocks=1 00:04:23.928 00:04:23.928 ' 00:04:23.928 02:52:18 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:23.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.928 --rc genhtml_branch_coverage=1 00:04:23.928 --rc genhtml_function_coverage=1 00:04:23.928 --rc genhtml_legend=1 00:04:23.928 --rc geninfo_all_blocks=1 00:04:23.928 --rc geninfo_unexecuted_blocks=1 00:04:23.928 00:04:23.928 ' 00:04:23.928 02:52:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:23.929 02:52:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:23.929 02:52:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:23.929 02:52:18 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:23.929 02:52:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.929 02:52:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:23.929 ************************************ 00:04:23.929 START TEST event_perf 00:04:23.929 ************************************ 00:04:23.929 02:52:18 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:23.929 Running I/O for 1 seconds...[2024-12-10 
02:52:18.149002] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:23.929 [2024-12-10 02:52:18.149130] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58171 ] 00:04:23.929 [2024-12-10 02:52:18.307447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:24.187 [2024-12-10 02:52:18.394366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.187 [2024-12-10 02:52:18.394569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:24.187 [2024-12-10 02:52:18.394816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:24.187 [2024-12-10 02:52:18.394860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.563 Running I/O for 1 seconds... 00:04:25.563 lcore 0: 202639 00:04:25.563 lcore 1: 202641 00:04:25.563 lcore 2: 202641 00:04:25.563 lcore 3: 202641 00:04:25.563 done. 00:04:25.563 00:04:25.563 real 0m1.420s 00:04:25.563 user 0m4.221s 00:04:25.563 sys 0m0.082s 00:04:25.563 02:52:19 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.563 ************************************ 00:04:25.563 END TEST event_perf 00:04:25.563 ************************************ 00:04:25.563 02:52:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:25.563 02:52:19 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:25.563 02:52:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:25.563 02:52:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.563 02:52:19 event -- common/autotest_common.sh@10 -- # set +x 00:04:25.563 ************************************ 00:04:25.563 START TEST event_reactor 00:04:25.563 ************************************ 00:04:25.563 02:52:19 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:25.563 [2024-12-10 02:52:19.605568] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:25.563 [2024-12-10 02:52:19.605980] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58216 ] 00:04:25.563 [2024-12-10 02:52:19.764134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.563 [2024-12-10 02:52:19.875681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.937 test_start 00:04:26.937 oneshot 00:04:26.937 tick 100 00:04:26.937 tick 100 00:04:26.937 tick 250 00:04:26.937 tick 100 00:04:26.937 tick 100 00:04:26.937 tick 250 00:04:26.937 tick 100 00:04:26.937 tick 500 00:04:26.937 tick 100 00:04:26.937 tick 100 00:04:26.937 tick 250 00:04:26.937 tick 100 00:04:26.937 tick 100 00:04:26.937 test_end 00:04:26.937 ************************************ 00:04:26.937 END TEST event_reactor 00:04:26.937 ************************************ 00:04:26.937 00:04:26.937 real 0m1.448s 00:04:26.937 user 0m1.277s 00:04:26.937 sys 0m0.062s 00:04:26.937 02:52:21 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.937 02:52:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:26.937 02:52:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:26.937 02:52:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:26.937 02:52:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.937 02:52:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.937 ************************************ 00:04:26.937 START TEST event_reactor_perf 00:04:26.937 ************************************ 00:04:26.937 02:52:21 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:26.937 [2024-12-10 02:52:21.117437] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:26.937 [2024-12-10 02:52:21.117545] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58247 ] 00:04:26.937 [2024-12-10 02:52:21.276220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.195 [2024-12-10 02:52:21.376393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.568 test_start 00:04:28.568 test_end 00:04:28.568 Performance: 313222 events per second 00:04:28.568 ************************************ 00:04:28.568 END TEST event_reactor_perf 00:04:28.568 ************************************ 00:04:28.568 00:04:28.568 real 0m1.447s 00:04:28.568 user 0m1.268s 00:04:28.568 sys 0m0.072s 00:04:28.568 02:52:22 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.568 02:52:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:28.568 02:52:22 event -- event/event.sh@49 -- # uname -s 00:04:28.568 02:52:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:28.568 02:52:22 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:28.568 02:52:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.568 02:52:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.568 02:52:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.568 ************************************ 00:04:28.568 START TEST event_scheduler 00:04:28.568 ************************************ 00:04:28.568 02:52:22 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:28.568 * Looking for test storage... 
00:04:28.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:28.568 02:52:22 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:28.568 02:52:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:28.568 02:52:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:28.568 02:52:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:28.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.568 02:52:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:28.568 02:52:22 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.568 02:52:22 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:28.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.568 --rc genhtml_branch_coverage=1 00:04:28.568 --rc genhtml_function_coverage=1 00:04:28.568 --rc genhtml_legend=1 00:04:28.568 --rc geninfo_all_blocks=1 00:04:28.568 --rc geninfo_unexecuted_blocks=1 00:04:28.568 00:04:28.568 ' 00:04:28.568 02:52:22 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:28.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.568 --rc genhtml_branch_coverage=1 00:04:28.568 --rc genhtml_function_coverage=1 00:04:28.568 --rc genhtml_legend=1 00:04:28.568 --rc geninfo_all_blocks=1 00:04:28.568 --rc geninfo_unexecuted_blocks=1 00:04:28.568 00:04:28.568 ' 00:04:28.568 02:52:22 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:28.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.568 --rc genhtml_branch_coverage=1 00:04:28.568 --rc genhtml_function_coverage=1 00:04:28.568 --rc genhtml_legend=1 00:04:28.569 --rc geninfo_all_blocks=1 00:04:28.569 --rc geninfo_unexecuted_blocks=1 00:04:28.569 00:04:28.569 ' 00:04:28.569 02:52:22 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:28.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.569 --rc genhtml_branch_coverage=1 00:04:28.569 --rc genhtml_function_coverage=1 00:04:28.569 --rc genhtml_legend=1 00:04:28.569 --rc geninfo_all_blocks=1 00:04:28.569 --rc geninfo_unexecuted_blocks=1 00:04:28.569 00:04:28.569 ' 00:04:28.569 02:52:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:28.569 02:52:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58323 00:04:28.569 02:52:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.569 02:52:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58323 00:04:28.569 02:52:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:28.569 02:52:22 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58323 ']' 00:04:28.569 02:52:22 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.569 02:52:22 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.569 02:52:22 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.569 02:52:22 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.569 02:52:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:28.569 [2024-12-10 02:52:22.771825] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:28.569 [2024-12-10 02:52:22.771926] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58323 ] 00:04:28.569 [2024-12-10 02:52:22.928675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:28.827 [2024-12-10 02:52:23.047992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.827 [2024-12-10 02:52:23.048213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.827 [2024-12-10 02:52:23.048319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:28.827 [2024-12-10 02:52:23.048333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.391 02:52:23 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.391 02:52:23 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:29.391 02:52:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:29.391 02:52:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.391 02:52:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.391 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:29.391 POWER: Cannot set governor of lcore 0 to userspace 00:04:29.392 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:29.392 POWER: Cannot set governor of lcore 0 to performance 00:04:29.392 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:29.392 POWER: Cannot set governor of lcore 0 to userspace 00:04:29.392 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:29.392 POWER: Cannot set governor of lcore 0 to userspace 00:04:29.392 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:29.392 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:29.392 POWER: Unable to set Power Management Environment for lcore 0 00:04:29.392 [2024-12-10 02:52:23.698600] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:29.392 [2024-12-10 02:52:23.698619] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:29.392 [2024-12-10 02:52:23.698628] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:29.392 [2024-12-10 02:52:23.698645] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:29.392 [2024-12-10 02:52:23.698653] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:29.392 [2024-12-10 02:52:23.698662] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:29.392 02:52:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.392 02:52:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:29.392 02:52:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.392 02:52:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.650 [2024-12-10 02:52:23.931196] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:29.650 02:52:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.650 02:52:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:29.650 02:52:23 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.650 02:52:23 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.650 02:52:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.650 ************************************ 00:04:29.650 START TEST scheduler_create_thread 00:04:29.650 ************************************ 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.650 2 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.650 3 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.650 4 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.650 5 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.650 6 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.650 7 00:04:29.650 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.651 02:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:29.651 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.651 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.651 8 00:04:29.651 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.651 02:52:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:29.651 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.651 02:52:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.651 9 00:04:29.651 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.651 02:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:29.651 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.651 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.651 10 00:04:29.651 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.651 02:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:29.651 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.651 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.651 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.651 02:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:29.651 02:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:29.651 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.651 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.908 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.908 02:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:29.908 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.908 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.908 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.908 02:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:29.908 02:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:29.908 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.908 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.908 ************************************ 00:04:29.908 END TEST scheduler_create_thread 00:04:29.908 ************************************ 00:04:29.908 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.908 00:04:29.908 real 0m0.107s 00:04:29.908 user 0m0.010s 00:04:29.908 sys 0m0.004s 00:04:29.908 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.908 02:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.908 02:52:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:29.908 02:52:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58323 00:04:29.908 02:52:24 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58323 ']' 00:04:29.908 02:52:24 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58323 00:04:29.908 02:52:24 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:29.908 02:52:24 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.908 02:52:24 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58323 00:04:29.908 killing process with pid 58323 00:04:29.908 02:52:24 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:29.908 02:52:24 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:29.908 02:52:24 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58323' 00:04:29.908 02:52:24 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58323 00:04:29.908 02:52:24 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58323 00:04:30.214 [2024-12-10 02:52:24.532000] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
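For reference, the scheduler_create_thread pass above reduces to a handful of plugin RPCs. A minimal sketch of the same calls, assuming the test's scheduler_plugin module is importable by rpc.py (rpc_cmd in the trace is the autotest wrapper that adds the app's RPC socket); -n names the thread, -m apparently gives the pin cpumask, and -a the active percentage:

    # threads pinned to a single core, busy 100% of the time
    rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # idle counterparts: same pinning, 0% active
    rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # unpinned thread, active roughly 30% of the time
    rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    # retune thread 11 to 50% active, then delete thread 12
    rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc.py --plugin scheduler_plugin scheduler_thread_delete 12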
00:04:31.147 00:04:31.147 real 0m2.710s 00:04:31.147 user 0m4.786s 00:04:31.147 sys 0m0.332s 00:04:31.147 02:52:25 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.147 02:52:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.147 ************************************ 00:04:31.147 END TEST event_scheduler 00:04:31.147 ************************************ 00:04:31.147 02:52:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:31.147 02:52:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:31.147 02:52:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.147 02:52:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.147 02:52:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.147 ************************************ 00:04:31.147 START TEST app_repeat 00:04:31.147 ************************************ 00:04:31.147 02:52:25 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:31.147 Process app_repeat pid: 58396 00:04:31.147 spdk_app_start Round 0 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58396 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58396' 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58396 /var/tmp/spdk-nbd.sock 00:04:31.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:31.147 02:52:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58396 ']' 00:04:31.147 02:52:25 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:31.147 02:52:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:31.147 02:52:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.147 02:52:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:31.147 02:52:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.147 02:52:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.147 [2024-12-10 02:52:25.395167] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
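The app_repeat harness just launched follows a simple start-and-wait pattern. Condensed from the event.sh trace above, with arguments exactly as traced and error handling omitted (-r names the private RPC socket, -m 0x3 the two-core mask, and -t matches the repeat_times=4 set earlier in the trace):

    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    # block until the app is up and accepting RPCs on its socket
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock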
00:04:31.147 [2024-12-10 02:52:25.395345] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58396 ] 00:04:31.405 [2024-12-10 02:52:25.558434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.405 [2024-12-10 02:52:25.668514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.405 [2024-12-10 02:52:25.668531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.970 02:52:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.970 02:52:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:31.970 02:52:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.228 Malloc0 00:04:32.228 02:52:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.486 Malloc1 00:04:32.486 02:52:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.486 02:52:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:32.744 /dev/nbd0 00:04:32.744 02:52:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:32.744 02:52:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:32.744 02:52:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:32.744 02:52:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:32.744 02:52:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:32.744 02:52:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:32.744 02:52:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:32.744 02:52:26 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:04:32.744 02:52:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:32.744 02:52:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:32.744 02:52:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:32.744 1+0 records in 00:04:32.744 1+0 records out 00:04:32.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426946 s, 9.6 MB/s 00:04:32.744 02:52:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:32.744 02:52:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:32.744 02:52:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:32.744 02:52:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:32.744 02:52:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:32.744 02:52:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.744 02:52:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.745 02:52:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:33.003 /dev/nbd1 00:04:33.003 02:52:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:33.003 02:52:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:33.003 1+0 records in 00:04:33.003 1+0 records out 00:04:33.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229553 s, 17.8 MB/s 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:33.003 02:52:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:33.003 02:52:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.003 02:52:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.003 02:52:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.003 02:52:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.003 
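Each nbd_start_disk RPC above is followed by the waitfornbd probe whose xtrace just scrolled by: wait for the device node to appear, then prove it is readable with one direct-I/O block. A condensed sketch of that helper, with retry bounds as traced (the sleep pacing is an assumption; the scratch-file path is the harness's own):

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumption: the traced helper paces its retries
        done
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct && break
        done
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [[ $size != 0 ]]    # the direct read must have produced data
    }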
02:52:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:33.262 { 00:04:33.262 "nbd_device": "/dev/nbd0", 00:04:33.262 "bdev_name": "Malloc0" 00:04:33.262 }, 00:04:33.262 { 00:04:33.262 "nbd_device": "/dev/nbd1", 00:04:33.262 "bdev_name": "Malloc1" 00:04:33.262 } 00:04:33.262 ]' 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:33.262 { 00:04:33.262 "nbd_device": "/dev/nbd0", 00:04:33.262 "bdev_name": "Malloc0" 00:04:33.262 }, 00:04:33.262 { 00:04:33.262 "nbd_device": "/dev/nbd1", 00:04:33.262 "bdev_name": "Malloc1" 00:04:33.262 } 00:04:33.262 ]' 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:33.262 /dev/nbd1' 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:33.262 /dev/nbd1' 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:33.262 256+0 records in 00:04:33.262 256+0 records out 00:04:33.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005164 s, 203 MB/s 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:33.262 256+0 records in 00:04:33.262 256+0 records out 00:04:33.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189083 s, 55.5 MB/s 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:33.262 256+0 records in 00:04:33.262 256+0 records out 00:04:33.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196275 s, 53.4 MB/s 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.262 02:52:27 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:33.262 02:52:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:33.263 02:52:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.263 02:52:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.263 02:52:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:33.263 02:52:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:33.263 02:52:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.263 02:52:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:33.521 02:52:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:33.521 02:52:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:33.521 02:52:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:33.521 02:52:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:33.522 02:52:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:33.522 02:52:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:33.522 02:52:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:33.522 02:52:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:33.522 02:52:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.522 02:52:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:33.780 02:52:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:33.780 02:52:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:33.780 02:52:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:33.780 02:52:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:33.780 02:52:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:33.780 02:52:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:33.780 02:52:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:33.780 02:52:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:33.780 02:52:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.780 02:52:28 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.780 02:52:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.038 02:52:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:34.038 02:52:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.038 02:52:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:34.038 02:52:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:34.038 02:52:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:34.038 02:52:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.038 02:52:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:34.038 02:52:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:34.038 02:52:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:34.038 02:52:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:34.038 02:52:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:34.038 02:52:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:34.038 02:52:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:34.296 02:52:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:35.232 [2024-12-10 02:52:29.272959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.232 [2024-12-10 02:52:29.372473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.232 [2024-12-10 02:52:29.372479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.232 [2024-12-10 02:52:29.502548] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:35.232 [2024-12-10 02:52:29.502627] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:37.761 spdk_app_start Round 1 00:04:37.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:37.761 02:52:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:37.761 02:52:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:37.761 02:52:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58396 /var/tmp/spdk-nbd.sock 00:04:37.761 02:52:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58396 ']' 00:04:37.761 02:52:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:37.761 02:52:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.761 02:52:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
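The write/verify cycle that just closed Round 0 is the heart of the test: one random 1 MiB file is pushed through both nbd devices with direct I/O and compared back byte-for-byte. The same commands, lifted out of the trace:

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$tmp_file bs=4096 count=256         # 1 MiB of random data
    for i in "${nbd_list[@]}"; do
        dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct  # write phase
    done
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M $tmp_file $i                             # verify phase
    done
    rm $tmp_file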
00:04:37.762 02:52:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.762 02:52:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:37.762 02:52:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.762 02:52:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:37.762 02:52:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.762 Malloc0 00:04:37.762 02:52:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.019 Malloc1 00:04:38.019 02:52:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.019 02:52:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.019 02:52:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.019 02:52:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:38.019 02:52:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.019 02:52:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:38.019 02:52:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.020 02:52:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.020 02:52:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.020 02:52:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:38.020 02:52:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.020 02:52:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:38.020 02:52:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:38.020 02:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:38.020 02:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.020 02:52:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:38.279 /dev/nbd0 00:04:38.279 02:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:38.279 02:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:38.279 02:52:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:38.279 02:52:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:38.279 02:52:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:38.279 02:52:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:38.279 02:52:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:38.279 02:52:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:38.279 02:52:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:38.279 02:52:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:38.279 02:52:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:38.279 1+0 records in 00:04:38.279 1+0 records out 
00:04:38.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189313 s, 21.6 MB/s 00:04:38.279 02:52:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:38.279 02:52:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:38.279 02:52:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:38.280 02:52:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:38.280 02:52:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:38.280 02:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.280 02:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.280 02:52:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:38.541 /dev/nbd1 00:04:38.541 02:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:38.541 02:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:38.541 1+0 records in 00:04:38.541 1+0 records out 00:04:38.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238219 s, 17.2 MB/s 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:38.541 02:52:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:38.541 02:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.541 02:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.541 02:52:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.541 02:52:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.541 02:52:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:38.801 { 00:04:38.801 "nbd_device": "/dev/nbd0", 00:04:38.801 "bdev_name": "Malloc0" 00:04:38.801 }, 00:04:38.801 { 00:04:38.801 "nbd_device": "/dev/nbd1", 00:04:38.801 "bdev_name": "Malloc1" 00:04:38.801 } 
00:04:38.801 ]' 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:38.801 { 00:04:38.801 "nbd_device": "/dev/nbd0", 00:04:38.801 "bdev_name": "Malloc0" 00:04:38.801 }, 00:04:38.801 { 00:04:38.801 "nbd_device": "/dev/nbd1", 00:04:38.801 "bdev_name": "Malloc1" 00:04:38.801 } 00:04:38.801 ]' 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:38.801 /dev/nbd1' 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:38.801 /dev/nbd1' 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:38.801 256+0 records in 00:04:38.801 256+0 records out 00:04:38.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00747805 s, 140 MB/s 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:38.801 256+0 records in 00:04:38.801 256+0 records out 00:04:38.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014925 s, 70.3 MB/s 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:38.801 256+0 records in 00:04:38.801 256+0 records out 00:04:38.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171714 s, 61.1 MB/s 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.801 02:52:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:39.060 02:52:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:39.060 02:52:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:39.060 02:52:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:39.060 02:52:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.060 02:52:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.060 02:52:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:39.060 02:52:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:39.060 02:52:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.060 02:52:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.060 02:52:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:39.318 02:52:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:39.318 02:52:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:39.318 02:52:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:39.318 02:52:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.318 02:52:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.318 02:52:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:39.318 02:52:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:39.318 02:52:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.318 02:52:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.318 02:52:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.318 02:52:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.576 02:52:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:39.576 02:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:39.576 02:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:04:39.576 02:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:39.576 02:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.576 02:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:39.576 02:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:39.576 02:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:39.576 02:52:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:39.576 02:52:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:39.576 02:52:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:39.576 02:52:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:39.576 02:52:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:39.834 02:52:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:40.401 [2024-12-10 02:52:34.682282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.401 [2024-12-10 02:52:34.766821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.401 [2024-12-10 02:52:34.766988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.658 [2024-12-10 02:52:34.867626] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:40.658 [2024-12-10 02:52:34.867714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:43.188 spdk_app_start Round 2 00:04:43.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:43.188 02:52:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:43.188 02:52:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:43.188 02:52:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58396 /var/tmp/spdk-nbd.sock 00:04:43.188 02:52:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58396 ']' 00:04:43.188 02:52:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:43.188 02:52:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.188 02:52:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
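By this point the per-round pattern is fully established, and the driver loop in event.sh is visible between the lines of the trace (sh@23 through sh@35). Roughly, as a reconstruction with the per-round body elided:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # create Malloc0/Malloc1, attach them as /dev/nbd0 and /dev/nbd1,
        # then run the write/verify pass shown above
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3    # give the app time to restart for the next round
    done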
00:04:43.188 02:52:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.188 02:52:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:43.188 02:52:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.188 02:52:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:43.188 02:52:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.188 Malloc0 00:04:43.188 02:52:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.446 Malloc1 00:04:43.446 02:52:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.446 02:52:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.446 02:52:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.446 02:52:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:43.446 02:52:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.446 02:52:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:43.446 02:52:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.446 02:52:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.446 02:52:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.446 02:52:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:43.447 02:52:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.447 02:52:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:43.447 02:52:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:43.447 02:52:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:43.447 02:52:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.447 02:52:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:43.705 /dev/nbd0 00:04:43.705 02:52:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:43.705 02:52:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.705 1+0 records in 00:04:43.705 1+0 records out 
00:04:43.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327618 s, 12.5 MB/s 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:43.705 02:52:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:43.705 02:52:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.705 02:52:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.705 02:52:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:43.963 /dev/nbd1 00:04:43.963 02:52:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:43.963 02:52:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.963 1+0 records in 00:04:43.963 1+0 records out 00:04:43.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182109 s, 22.5 MB/s 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:43.963 02:52:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:43.963 02:52:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.963 02:52:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.963 02:52:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.963 02:52:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.963 02:52:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:44.222 { 00:04:44.222 "nbd_device": "/dev/nbd0", 00:04:44.222 "bdev_name": "Malloc0" 00:04:44.222 }, 00:04:44.222 { 00:04:44.222 "nbd_device": "/dev/nbd1", 00:04:44.222 "bdev_name": "Malloc1" 00:04:44.222 } 
00:04:44.222 ]' 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:44.222 { 00:04:44.222 "nbd_device": "/dev/nbd0", 00:04:44.222 "bdev_name": "Malloc0" 00:04:44.222 }, 00:04:44.222 { 00:04:44.222 "nbd_device": "/dev/nbd1", 00:04:44.222 "bdev_name": "Malloc1" 00:04:44.222 } 00:04:44.222 ]' 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:44.222 /dev/nbd1' 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:44.222 /dev/nbd1' 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:44.222 256+0 records in 00:04:44.222 256+0 records out 00:04:44.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111787 s, 93.8 MB/s 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:44.222 256+0 records in 00:04:44.222 256+0 records out 00:04:44.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171496 s, 61.1 MB/s 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:44.222 256+0 records in 00:04:44.222 256+0 records out 00:04:44.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164603 s, 63.7 MB/s 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:44.222 02:52:38 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.222 02:52:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:44.480 02:52:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:44.480 02:52:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:44.480 02:52:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:44.480 02:52:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.480 02:52:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.480 02:52:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:44.480 02:52:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:44.480 02:52:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.480 02:52:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.480 02:52:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:44.738 02:52:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:44.738 02:52:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:44.738 02:52:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:44.738 02:52:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.738 02:52:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.738 02:52:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:44.738 02:52:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:44.738 02:52:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.738 02:52:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.738 02:52:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.738 02:52:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.995 02:52:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:44.995 02:52:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:44.995 02:52:39 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:44.995 02:52:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:44.995 02:52:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:44.995 02:52:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.995 02:52:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:44.995 02:52:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:44.995 02:52:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:44.995 02:52:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:44.995 02:52:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:44.995 02:52:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:44.995 02:52:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:45.254 02:52:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:45.821 [2024-12-10 02:52:40.163093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.084 [2024-12-10 02:52:40.246043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.084 [2024-12-10 02:52:40.246171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.084 [2024-12-10 02:52:40.348549] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.084 [2024-12-10 02:52:40.348623] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:48.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:48.628 02:52:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58396 /var/tmp/spdk-nbd.sock 00:04:48.628 02:52:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58396 ']' 00:04:48.628 02:52:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:48.628 02:52:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.628 02:52:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
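After this final waitforlisten, the teardown traced just below is the stock killprocess helper: confirm the pid is alive, make sure it is not sudo itself, then kill and reap it. Condensed to the branch the traced run takes:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]]
        kill -0 "$pid"                                   # is it still running?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name != sudo ]]                      # safety check; sudo branch elided
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap and propagate exit status
    }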
00:04:48.628 02:52:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.628 02:52:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:48.628 02:52:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.628 02:52:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:48.628 02:52:42 event.app_repeat -- event/event.sh@39 -- # killprocess 58396 00:04:48.628 02:52:42 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58396 ']' 00:04:48.628 02:52:42 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58396 00:04:48.628 02:52:42 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:48.629 02:52:42 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.629 02:52:42 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58396 00:04:48.629 killing process with pid 58396 00:04:48.629 02:52:42 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.629 02:52:42 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.629 02:52:42 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58396' 00:04:48.629 02:52:42 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58396 00:04:48.629 02:52:42 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58396 00:04:49.195 spdk_app_start is called in Round 0. 00:04:49.195 Shutdown signal received, stop current app iteration 00:04:49.195 Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 reinitialization... 00:04:49.195 spdk_app_start is called in Round 1. 00:04:49.195 Shutdown signal received, stop current app iteration 00:04:49.195 Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 reinitialization... 00:04:49.195 spdk_app_start is called in Round 2. 00:04:49.195 Shutdown signal received, stop current app iteration 00:04:49.195 Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 reinitialization... 00:04:49.195 spdk_app_start is called in Round 3. 00:04:49.195 Shutdown signal received, stop current app iteration 00:04:49.195 02:52:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:49.195 02:52:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:49.195 00:04:49.195 real 0m18.028s 00:04:49.195 user 0m39.510s 00:04:49.195 sys 0m2.173s 00:04:49.195 ************************************ 00:04:49.195 END TEST app_repeat 00:04:49.195 ************************************ 00:04:49.195 02:52:43 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.195 02:52:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.195 02:52:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:49.195 02:52:43 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:49.195 02:52:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.195 02:52:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.195 02:52:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.195 ************************************ 00:04:49.195 START TEST cpu_locks 00:04:49.195 ************************************ 00:04:49.195 02:52:43 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:49.195 * Looking for test storage... 
00:04:49.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:49.195 02:52:43 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:49.195 02:52:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:49.195 02:52:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:49.195 02:52:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.195 02:52:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.196 02:52:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.196 02:52:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:49.196 02:52:43 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.196 02:52:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:49.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.196 --rc genhtml_branch_coverage=1 00:04:49.196 --rc genhtml_function_coverage=1 00:04:49.196 --rc genhtml_legend=1 00:04:49.196 --rc geninfo_all_blocks=1 00:04:49.196 --rc geninfo_unexecuted_blocks=1 00:04:49.196 00:04:49.196 ' 00:04:49.196 02:52:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:49.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.196 --rc genhtml_branch_coverage=1 00:04:49.196 --rc genhtml_function_coverage=1 
00:04:49.196 --rc genhtml_legend=1 00:04:49.196 --rc geninfo_all_blocks=1 00:04:49.196 --rc geninfo_unexecuted_blocks=1 00:04:49.196 00:04:49.196 ' 00:04:49.196 02:52:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:49.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.196 --rc genhtml_branch_coverage=1 00:04:49.196 --rc genhtml_function_coverage=1 00:04:49.196 --rc genhtml_legend=1 00:04:49.196 --rc geninfo_all_blocks=1 00:04:49.196 --rc geninfo_unexecuted_blocks=1 00:04:49.196 00:04:49.196 ' 00:04:49.196 02:52:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:49.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.196 --rc genhtml_branch_coverage=1 00:04:49.196 --rc genhtml_function_coverage=1 00:04:49.196 --rc genhtml_legend=1 00:04:49.196 --rc geninfo_all_blocks=1 00:04:49.196 --rc geninfo_unexecuted_blocks=1 00:04:49.196 00:04:49.196 ' 00:04:49.196 02:52:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:49.196 02:52:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:49.196 02:52:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:49.196 02:52:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:49.196 02:52:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.196 02:52:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.196 02:52:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.196 ************************************ 00:04:49.196 START TEST default_locks 00:04:49.196 ************************************ 00:04:49.196 02:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:49.196 02:52:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58833 00:04:49.196 02:52:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58833 00:04:49.196 02:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58833 ']' 00:04:49.196 02:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.196 02:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.196 02:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.196 02:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.196 02:52:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.196 02:52:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.454 [2024-12-10 02:52:43.646217] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:49.454 [2024-12-10 02:52:43.646343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58833 ] 00:04:49.454 [2024-12-10 02:52:43.801402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.712 [2024-12-10 02:52:43.888448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.309 02:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.309 02:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:50.309 02:52:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58833 00:04:50.309 02:52:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58833 00:04:50.309 02:52:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.602 02:52:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58833 00:04:50.602 02:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58833 ']' 00:04:50.602 02:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58833 00:04:50.602 02:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:50.602 02:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.602 02:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58833 00:04:50.602 02:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.602 killing process with pid 58833 00:04:50.603 02:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.603 02:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58833' 00:04:50.603 02:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58833 00:04:50.603 02:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58833 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58833 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58833 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58833 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58833 ']' 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.536 02:52:45 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.536 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58833) - No such process 00:04:51.536 ERROR: process (pid: 58833) is no longer running 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:51.536 00:04:51.536 real 0m2.347s 00:04:51.536 user 0m2.346s 00:04:51.536 sys 0m0.447s 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.536 02:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.536 ************************************ 00:04:51.536 END TEST default_locks 00:04:51.536 ************************************ 00:04:51.797 02:52:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:51.797 02:52:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.797 02:52:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.797 02:52:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.797 ************************************ 00:04:51.797 START TEST default_locks_via_rpc 00:04:51.797 ************************************ 00:04:51.797 02:52:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:51.797 02:52:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58886 00:04:51.797 02:52:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.797 02:52:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58886 00:04:51.797 02:52:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58886 ']' 00:04:51.797 02:52:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.797 02:52:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
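default_locks passes when the freshly launched target holds its core lock and the lock vanishes after the process is killed. A hedged reconstruction of the probe the suite repeats (the helper name and both commands appear in the cpu_locks.sh@22 lines above; the function body is inferred from the xtrace, not copied from the script):

    locks_exist() {
        # true if pid $1 holds a POSIX file lock on a spdk_cpu_lock_* file
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }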
00:04:51.797 02:52:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.797 02:52:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.797 02:52:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.797 [2024-12-10 02:52:46.022219] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:51.797 [2024-12-10 02:52:46.022319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58886 ] 00:04:51.797 [2024-12-10 02:52:46.173255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.059 [2024-12-10 02:52:46.258707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58886 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58886 00:04:52.630 02:52:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.891 02:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58886 00:04:52.891 02:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58886 ']' 00:04:52.891 02:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58886 00:04:52.891 02:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:52.891 02:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.891 02:52:47 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58886 00:04:52.891 02:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.891 killing process with pid 58886 00:04:52.891 02:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.891 02:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58886' 00:04:52.891 02:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58886 00:04:52.891 02:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58886 00:04:54.275 00:04:54.275 real 0m2.352s 00:04:54.275 user 0m2.377s 00:04:54.275 sys 0m0.405s 00:04:54.275 02:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.275 02:52:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.275 ************************************ 00:04:54.275 END TEST default_locks_via_rpc 00:04:54.275 ************************************ 00:04:54.275 02:52:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:54.275 02:52:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.275 02:52:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.275 02:52:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.275 ************************************ 00:04:54.275 START TEST non_locking_app_on_locked_coremask 00:04:54.275 ************************************ 00:04:54.275 02:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:54.275 02:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58949 00:04:54.275 02:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58949 /var/tmp/spdk.sock 00:04:54.275 02:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58949 ']' 00:04:54.275 02:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.275 02:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.275 02:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.275 02:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.275 02:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.275 02:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.275 [2024-12-10 02:52:48.414054] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
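The default_locks_via_rpc test that just finished exercises the same core locks, but toggles them on a live target instead of at launch. A sketch of the two calls behind its rpc_cmd lines, assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock:

    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release the core-0 lock at runtime
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claim it; the lslocks probe sees it again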
00:04:54.275 [2024-12-10 02:52:48.414158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58949 ] 00:04:54.275 [2024-12-10 02:52:48.565614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.275 [2024-12-10 02:52:48.649802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.232 02:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.232 02:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:55.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.232 02:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58965 00:04:55.232 02:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58965 /var/tmp/spdk2.sock 00:04:55.232 02:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:55.232 02:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58965 ']' 00:04:55.232 02:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.232 02:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.232 02:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.232 02:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.232 02:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.232 [2024-12-10 02:52:49.375283] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:55.232 [2024-12-10 02:52:49.375576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58965 ] 00:04:55.232 [2024-12-10 02:52:49.540532] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:55.232 [2024-12-10 02:52:49.540582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.491 [2024-12-10 02:52:49.708319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.433 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.433 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:56.433 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58949 00:04:56.433 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58949 00:04:56.433 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.693 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58949 00:04:56.693 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58949 ']' 00:04:56.693 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58949 00:04:56.693 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:56.693 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.693 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58949 00:04:56.954 killing process with pid 58949 00:04:56.954 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.954 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.954 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58949' 00:04:56.954 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58949 00:04:56.954 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58949 00:04:59.569 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58965 00:04:59.569 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58965 ']' 00:04:59.569 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58965 00:04:59.569 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:59.569 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.569 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58965 00:04:59.569 killing process with pid 58965 00:04:59.569 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.569 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.569 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58965' 00:04:59.569 02:52:53 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58965 00:04:59.569 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58965 00:05:00.511 00:05:00.511 real 0m6.420s 00:05:00.511 user 0m6.722s 00:05:00.511 sys 0m0.858s 00:05:00.511 02:52:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.511 ************************************ 00:05:00.511 END TEST non_locking_app_on_locked_coremask 00:05:00.511 ************************************ 00:05:00.511 02:52:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.511 02:52:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:00.511 02:52:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.511 02:52:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.511 02:52:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.511 ************************************ 00:05:00.511 START TEST locking_app_on_unlocked_coremask 00:05:00.511 ************************************ 00:05:00.511 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:00.511 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59056 00:05:00.511 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59056 /var/tmp/spdk.sock 00:05:00.511 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59056 ']' 00:05:00.511 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.511 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.511 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.511 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.511 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.511 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:00.511 [2024-12-10 02:52:54.869276] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:00.511 [2024-12-10 02:52:54.869492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59056 ] 00:05:00.772 [2024-12-10 02:52:55.021012] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
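The non_locking_app_on_locked_coremask test that just ended shows a second target sharing a locked core as long as it opts out of locking; the launch pair, with flags and pids exactly as logged:

    build/bin/spdk_tgt -m 0x1 &                                                  # pid 58949 claims the core-0 lock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 58965 runs on core 0 without claiming it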
00:05:00.772 [2024-12-10 02:52:55.021045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.772 [2024-12-10 02:52:55.101568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.344 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.344 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:01.344 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:01.344 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59072 00:05:01.344 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59072 /var/tmp/spdk2.sock 00:05:01.344 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59072 ']' 00:05:01.344 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.344 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.344 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.344 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.344 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.606 [2024-12-10 02:52:55.772579] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:01.606 [2024-12-10 02:52:55.772861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59072 ] 00:05:01.606 [2024-12-10 02:52:55.934671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.908 [2024-12-10 02:52:56.102259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.847 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.847 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:02.847 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59072 00:05:02.847 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.847 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59072 00:05:03.109 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59056 00:05:03.109 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59056 ']' 00:05:03.109 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59056 00:05:03.109 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:03.109 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.109 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59056 00:05:03.109 killing process with pid 59056 00:05:03.109 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.109 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.109 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59056' 00:05:03.109 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59056 00:05:03.109 02:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59056 00:05:05.725 02:52:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59072 00:05:05.725 02:52:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59072 ']' 00:05:05.725 02:52:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59072 00:05:05.725 02:52:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:05.725 02:52:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.725 02:52:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59072 00:05:05.725 killing process with pid 59072 00:05:05.725 02:52:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.725 02:52:59 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.725 02:52:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59072' 00:05:05.725 02:52:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59072 00:05:05.725 02:52:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59072 00:05:07.117 00:05:07.117 real 0m6.283s 00:05:07.117 user 0m6.534s 00:05:07.117 sys 0m0.822s 00:05:07.117 02:53:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.117 02:53:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.117 ************************************ 00:05:07.117 END TEST locking_app_on_unlocked_coremask 00:05:07.117 ************************************ 00:05:07.117 02:53:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:07.117 02:53:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.117 02:53:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.117 02:53:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.117 ************************************ 00:05:07.117 START TEST locking_app_on_locked_coremask 00:05:07.117 ************************************ 00:05:07.117 02:53:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:07.117 02:53:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.117 02:53:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59163 00:05:07.117 02:53:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59163 /var/tmp/spdk.sock 00:05:07.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.117 02:53:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59163 ']' 00:05:07.117 02:53:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.117 02:53:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.117 02:53:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.117 02:53:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.117 02:53:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.117 [2024-12-10 02:53:01.227705] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
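locking_app_on_locked_coremask, starting here, expects the second launch to fail, so it runs waitforlisten under the NOT wrapper seen just below. A simplified sketch of that wrapper's intent, reconstructed from the es=1 bookkeeping in the log rather than copied from autotest_common.sh:

    NOT() {
        # invert the exit status of a command that is expected to fail
        if "$@"; then return 1; else return 0; fi
    }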
00:05:07.117 [2024-12-10 02:53:01.227992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59163 ] 00:05:07.117 [2024-12-10 02:53:01.390006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.377 [2024-12-10 02:53:01.502011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59179 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59179 /var/tmp/spdk2.sock 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59179 /var/tmp/spdk2.sock 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59179 /var/tmp/spdk2.sock 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59179 ']' 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.950 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.210 [2024-12-10 02:53:02.400351] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:08.210 [2024-12-10 02:53:02.400943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59179 ] 00:05:08.470 [2024-12-10 02:53:02.599006] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59163 has claimed it. 00:05:08.470 [2024-12-10 02:53:02.599073] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:08.730 ERROR: process (pid: 59179) is no longer running 00:05:08.730 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59179) - No such process 00:05:08.730 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.730 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:08.730 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:08.730 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.730 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.730 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.731 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59163 00:05:08.731 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59163 00:05:08.731 02:53:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.991 02:53:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59163 00:05:08.991 02:53:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59163 ']' 00:05:08.991 02:53:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59163 00:05:08.991 02:53:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:08.991 02:53:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.991 02:53:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59163 00:05:08.991 killing process with pid 59163 00:05:08.991 02:53:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.991 02:53:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.991 02:53:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59163' 00:05:08.991 02:53:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59163 00:05:08.991 02:53:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59163 00:05:10.922 00:05:10.922 real 0m3.630s 00:05:10.922 user 0m3.865s 00:05:10.922 sys 0m0.695s 00:05:10.922 02:53:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.922 ************************************ 00:05:10.922 END 
TEST locking_app_on_locked_coremask 00:05:10.922 ************************************ 00:05:10.922 02:53:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.922 02:53:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:10.922 02:53:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.922 02:53:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.922 02:53:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.922 ************************************ 00:05:10.922 START TEST locking_overlapped_coremask 00:05:10.922 ************************************ 00:05:10.922 02:53:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:10.922 02:53:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59243 00:05:10.922 02:53:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59243 /var/tmp/spdk.sock 00:05:10.922 02:53:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:10.922 02:53:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59243 ']' 00:05:10.922 02:53:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.922 02:53:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.922 02:53:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.922 02:53:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.922 02:53:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.922 [2024-12-10 02:53:04.900184] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
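locking_overlapped_coremask moves from identical masks to overlapping ones: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the collision the test is after. The pair of launches, as logged above and below:

    build/bin/spdk_tgt -m 0x7 &                          # pid 59243 claims cores 0, 1, 2
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock    # pid 59261: cannot lock core 2, exits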
00:05:10.922 [2024-12-10 02:53:04.900309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59243 ] 00:05:10.922 [2024-12-10 02:53:05.058084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:10.922 [2024-12-10 02:53:05.176717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.922 [2024-12-10 02:53:05.177293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.922 [2024-12-10 02:53:05.177475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59261 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59261 /var/tmp/spdk2.sock 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59261 /var/tmp/spdk2.sock 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59261 /var/tmp/spdk2.sock 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59261 ']' 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.491 02:53:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.752 [2024-12-10 02:53:05.889977] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:11.752 [2024-12-10 02:53:05.890625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59261 ] 00:05:11.752 [2024-12-10 02:53:06.072229] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59243 has claimed it. 00:05:11.752 [2024-12-10 02:53:06.072297] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:12.320 ERROR: process (pid: 59261) is no longer running 00:05:12.320 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59261) - No such process 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59243 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59243 ']' 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59243 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59243 00:05:12.320 killing process with pid 59243 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59243' 00:05:12.320 02:53:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59243 00:05:12.320 02:53:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59243 00:05:14.230 ************************************ 00:05:14.230 END TEST locking_overlapped_coremask 00:05:14.230 ************************************ 00:05:14.230 00:05:14.230 real 0m3.477s 00:05:14.230 user 0m9.482s 00:05:14.230 sys 0m0.464s 00:05:14.230 02:53:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.230 02:53:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.230 02:53:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:14.230 02:53:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.230 02:53:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.230 02:53:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.230 ************************************ 00:05:14.230 START TEST locking_overlapped_coremask_via_rpc 00:05:14.230 ************************************ 00:05:14.230 02:53:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:14.230 02:53:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59314 00:05:14.230 02:53:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59314 /var/tmp/spdk.sock 00:05:14.230 02:53:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:14.230 02:53:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59314 ']' 00:05:14.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.230 02:53:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.230 02:53:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.230 02:53:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.230 02:53:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.230 02:53:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.230 [2024-12-10 02:53:08.453489] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:14.230 [2024-12-10 02:53:08.453637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59314 ] 00:05:14.490 [2024-12-10 02:53:08.620666] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:14.490 [2024-12-10 02:53:08.620738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:14.490 [2024-12-10 02:53:08.753452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.490 [2024-12-10 02:53:08.753676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.490 [2024-12-10 02:53:08.753767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.064 02:53:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.064 02:53:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:15.064 02:53:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59332 00:05:15.064 02:53:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59332 /var/tmp/spdk2.sock 00:05:15.064 02:53:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59332 ']' 00:05:15.064 02:53:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:15.064 02:53:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.064 02:53:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.064 02:53:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.064 02:53:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.064 02:53:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.325 [2024-12-10 02:53:09.511083] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:15.325 [2024-12-10 02:53:09.511759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59332 ] 00:05:15.325 [2024-12-10 02:53:09.691585] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:15.325 [2024-12-10 02:53:09.691658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.896 [2024-12-10 02:53:09.973250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.896 [2024-12-10 02:53:09.976604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.896 [2024-12-10 02:53:09.976632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:17.800 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.800 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.800 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:17.800 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.800 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.800 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.800 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.800 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:17.800 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.800 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:17.800 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.800 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:17.800 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.801 [2024-12-10 02:53:12.115633] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59314 has claimed it. 00:05:17.801 request: 00:05:17.801 { 00:05:17.801 "method": "framework_enable_cpumask_locks", 00:05:17.801 "req_id": 1 00:05:17.801 } 00:05:17.801 Got JSON-RPC error response 00:05:17.801 response: 00:05:17.801 { 00:05:17.801 "code": -32603, 00:05:17.801 "message": "Failed to claim CPU core: 2" 00:05:17.801 } 00:05:17.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
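[editor's note] The -32603 response above is the expected outcome: the second target tried to claim core 2, whose lock file is already held by pid 59314. A minimal sketch of reproducing the collision by hand, assuming a built spdk_tgt and the tree's rpc.py ($SPDK here stands in for the repo root; the test uses the full /home/vagrant/spdk_repo/spdk paths):

    # Both targets start with lock claiming deferred, exactly as in this test.
    $SPDK/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0,1,2
    $SPDK/build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # cores 2,3,4

    # The first target claims its cores; /var/tmp/spdk_cpu_lock_000..002 appear.
    $SPDK/scripts/rpc.py framework_enable_cpumask_locks

    # The second target now collides on the shared core 2, and the RPC fails
    # with -32603 "Failed to claim CPU core: 2", as captured above.
    $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks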
00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59314 /var/tmp/spdk.sock 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59314 ']' 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.801 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.062 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.062 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.062 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59332 /var/tmp/spdk2.sock 00:05:18.062 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59332 ']' 00:05:18.062 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.062 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.062 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
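[editor's note] Both targets in this test are gated on waitforlisten, whose echo lines ("Waiting for process to start up and listen on UNIX domain socket ...") and max_retries=100 counter are visible in the trace. The real helper lives in autotest_common.sh and does more bookkeeping; a minimal sketch of the same polling idea, assuming rpc.py and the spdk_get_version method:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i-- > 0 )); do
            # Stop early if the target died while we were waiting.
            kill -0 "$pid" 2>/dev/null || return 1
            # One successful RPC round-trip means the listener is up.
            "$SPDK/scripts/rpc.py" -s "$rpc_addr" -t 1 spdk_get_version >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }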
00:05:18.062 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.062 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.324 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.324 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.324 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:18.324 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:18.324 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:18.324 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:18.324 00:05:18.324 real 0m4.223s 00:05:18.324 user 0m1.337s 00:05:18.324 sys 0m0.184s 00:05:18.324 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.324 02:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.324 ************************************ 00:05:18.324 END TEST locking_overlapped_coremask_via_rpc 00:05:18.324 ************************************ 00:05:18.324 02:53:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:18.324 02:53:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59314 ]] 00:05:18.324 02:53:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59314 00:05:18.324 02:53:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59314 ']' 00:05:18.324 02:53:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59314 00:05:18.324 02:53:12 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:18.324 02:53:12 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.324 02:53:12 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59314 00:05:18.324 killing process with pid 59314 00:05:18.324 02:53:12 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.324 02:53:12 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.324 02:53:12 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59314' 00:05:18.324 02:53:12 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59314 00:05:18.325 02:53:12 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59314 00:05:20.239 02:53:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59332 ]] 00:05:20.239 02:53:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59332 00:05:20.239 02:53:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59332 ']' 00:05:20.239 02:53:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59332 00:05:20.239 02:53:14 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:20.239 02:53:14 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.239 
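[editor's note] check_remaining_locks, expanded twice in this test (once per lock scenario), is the assertion that the successful RPC claim left exactly the lock files for cores 0-2 behind. Condensed from the trace above into a standalone sketch:

    check_remaining_locks_sketch() {
        # The glob collects whatever lock files actually exist under /var/tmp...
        local locks=(/var/tmp/spdk_cpu_lock_*)
        # ...while brace expansion lists the ones the 0x7 mask should produce.
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        # Joined word-for-word, the two lists must match exactly:
        # no core lock missing, and no stray lock left over.
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }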
02:53:14 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59332 00:05:20.239 killing process with pid 59332 00:05:20.239 02:53:14 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:20.239 02:53:14 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:20.239 02:53:14 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59332' 00:05:20.239 02:53:14 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59332 00:05:20.239 02:53:14 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59332 00:05:22.142 02:53:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:22.142 02:53:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:22.142 02:53:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59314 ]] 00:05:22.142 02:53:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59314 00:05:22.142 02:53:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59314 ']' 00:05:22.142 Process with pid 59314 is not found 00:05:22.142 02:53:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59314 00:05:22.142 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59314) - No such process 00:05:22.142 02:53:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59314 is not found' 00:05:22.142 02:53:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59332 ]] 00:05:22.142 Process with pid 59332 is not found 00:05:22.142 02:53:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59332 00:05:22.142 02:53:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59332 ']' 00:05:22.142 02:53:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59332 00:05:22.142 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59332) - No such process 00:05:22.142 02:53:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59332 is not found' 00:05:22.142 02:53:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:22.142 00:05:22.142 real 0m32.724s 00:05:22.142 user 1m2.575s 00:05:22.142 sys 0m4.929s 00:05:22.142 02:53:16 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.142 ************************************ 00:05:22.142 END TEST cpu_locks 00:05:22.142 ************************************ 00:05:22.142 02:53:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.142 ************************************ 00:05:22.142 END TEST event 00:05:22.142 ************************************ 00:05:22.142 00:05:22.142 real 0m58.207s 00:05:22.142 user 1m53.789s 00:05:22.142 sys 0m7.880s 00:05:22.142 02:53:16 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.142 02:53:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.142 02:53:16 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:22.142 02:53:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.142 02:53:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.142 02:53:16 -- common/autotest_common.sh@10 -- # set +x 00:05:22.142 ************************************ 00:05:22.142 START TEST thread 00:05:22.142 ************************************ 00:05:22.142 02:53:16 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:22.142 * Looking for test storage... 
00:05:22.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:22.142 02:53:16 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:22.142 02:53:16 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:22.142 02:53:16 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:22.142 02:53:16 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:22.142 02:53:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.142 02:53:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.142 02:53:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.142 02:53:16 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.142 02:53:16 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.142 02:53:16 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.143 02:53:16 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.143 02:53:16 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.143 02:53:16 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.143 02:53:16 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.143 02:53:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.143 02:53:16 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:22.143 02:53:16 thread -- scripts/common.sh@345 -- # : 1 00:05:22.143 02:53:16 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.143 02:53:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.143 02:53:16 thread -- scripts/common.sh@365 -- # decimal 1 00:05:22.143 02:53:16 thread -- scripts/common.sh@353 -- # local d=1 00:05:22.143 02:53:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.143 02:53:16 thread -- scripts/common.sh@355 -- # echo 1 00:05:22.143 02:53:16 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.143 02:53:16 thread -- scripts/common.sh@366 -- # decimal 2 00:05:22.143 02:53:16 thread -- scripts/common.sh@353 -- # local d=2 00:05:22.143 02:53:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.143 02:53:16 thread -- scripts/common.sh@355 -- # echo 2 00:05:22.143 02:53:16 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.143 02:53:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.143 02:53:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.143 02:53:16 thread -- scripts/common.sh@368 -- # return 0 00:05:22.143 02:53:16 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.143 02:53:16 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:22.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.143 --rc genhtml_branch_coverage=1 00:05:22.143 --rc genhtml_function_coverage=1 00:05:22.143 --rc genhtml_legend=1 00:05:22.143 --rc geninfo_all_blocks=1 00:05:22.143 --rc geninfo_unexecuted_blocks=1 00:05:22.143 00:05:22.143 ' 00:05:22.143 02:53:16 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:22.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.143 --rc genhtml_branch_coverage=1 00:05:22.143 --rc genhtml_function_coverage=1 00:05:22.143 --rc genhtml_legend=1 00:05:22.143 --rc geninfo_all_blocks=1 00:05:22.143 --rc geninfo_unexecuted_blocks=1 00:05:22.143 00:05:22.143 ' 00:05:22.143 02:53:16 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:22.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:22.143 --rc genhtml_branch_coverage=1 00:05:22.143 --rc genhtml_function_coverage=1 00:05:22.143 --rc genhtml_legend=1 00:05:22.143 --rc geninfo_all_blocks=1 00:05:22.143 --rc geninfo_unexecuted_blocks=1 00:05:22.143 00:05:22.143 ' 00:05:22.143 02:53:16 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:22.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.143 --rc genhtml_branch_coverage=1 00:05:22.143 --rc genhtml_function_coverage=1 00:05:22.143 --rc genhtml_legend=1 00:05:22.143 --rc geninfo_all_blocks=1 00:05:22.143 --rc geninfo_unexecuted_blocks=1 00:05:22.143 00:05:22.143 ' 00:05:22.143 02:53:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:22.143 02:53:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:22.143 02:53:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.143 02:53:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.143 ************************************ 00:05:22.143 START TEST thread_poller_perf 00:05:22.143 ************************************ 00:05:22.143 02:53:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:22.143 [2024-12-10 02:53:16.445666] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:22.143 [2024-12-10 02:53:16.445776] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59516 ] 00:05:22.401 [2024-12-10 02:53:16.607940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.401 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:22.401 [2024-12-10 02:53:16.708491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.786 [2024-12-10T02:53:18.174Z] ====================================== 00:05:23.786 [2024-12-10T02:53:18.174Z] busy:2615877516 (cyc) 00:05:23.786 [2024-12-10T02:53:18.174Z] total_run_count: 304000 00:05:23.786 [2024-12-10T02:53:18.174Z] tsc_hz: 2600000000 (cyc) 00:05:23.786 [2024-12-10T02:53:18.174Z] ====================================== 00:05:23.786 [2024-12-10T02:53:18.174Z] poller_cost: 8604 (cyc), 3309 (nsec) 00:05:23.786 00:05:23.786 real 0m1.465s 00:05:23.786 user 0m1.298s 00:05:23.786 sys 0m0.058s 00:05:23.786 ************************************ 00:05:23.786 END TEST thread_poller_perf 00:05:23.786 ************************************ 00:05:23.786 02:53:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.786 02:53:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.786 02:53:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:23.786 02:53:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:23.786 02:53:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.786 02:53:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.786 ************************************ 00:05:23.786 START TEST thread_poller_perf 00:05:23.786 ************************************ 00:05:23.786 02:53:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:23.786 [2024-12-10 02:53:17.992047] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:23.786 [2024-12-10 02:53:17.992468] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59553 ] 00:05:24.047 [2024-12-10 02:53:18.167879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.047 [2024-12-10 02:53:18.301800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.047 Running 1000 pollers for 1 seconds with 0 microseconds period. 
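[editor's note] The figures in the table above fit together: judging from its invocation and echo line, poller_perf registers -b 1000 pollers with a -l 1 microsecond period, spins the reactor for -t 1 second, then reports the average cost per poller execution (the 0-period run announced just above measures the cheaper busy-poller path the same way). Dividing the reported values reproduces the cost, with tsc_hz = 2600000000 cyc/s as printed:

    # poller_cost (cyc)  = busy cycles / total_run_count
    #   2615877516 / 304000            = 8604 cyc   (rounded down)
    # poller_cost (nsec) = cycles / tsc_hz * 1e9
    #   8604 / 2600000000 * 1e9        = 3309 nsec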
00:05:25.433 [2024-12-10T02:53:19.821Z] ====================================== 00:05:25.433 [2024-12-10T02:53:19.821Z] busy:2603875344 (cyc) 00:05:25.433 [2024-12-10T02:53:19.821Z] total_run_count: 3636000 00:05:25.433 [2024-12-10T02:53:19.821Z] tsc_hz: 2600000000 (cyc) 00:05:25.433 [2024-12-10T02:53:19.821Z] ====================================== 00:05:25.433 [2024-12-10T02:53:19.821Z] poller_cost: 716 (cyc), 275 (nsec) 00:05:25.433 ************************************ 00:05:25.433 END TEST thread_poller_perf 00:05:25.433 ************************************ 00:05:25.433 00:05:25.433 real 0m1.532s 00:05:25.433 user 0m1.328s 00:05:25.433 sys 0m0.093s 00:05:25.433 02:53:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.433 02:53:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.433 02:53:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:25.433 00:05:25.433 real 0m3.303s 00:05:25.433 user 0m2.750s 00:05:25.433 sys 0m0.282s 00:05:25.433 02:53:19 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.433 02:53:19 thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.433 ************************************ 00:05:25.433 END TEST thread 00:05:25.433 ************************************ 00:05:25.433 02:53:19 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:25.433 02:53:19 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:25.433 02:53:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.433 02:53:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.433 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:25.433 ************************************ 00:05:25.433 START TEST app_cmdline 00:05:25.433 ************************************ 00:05:25.433 02:53:19 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:25.433 * Looking for test storage... 
00:05:25.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:25.433 02:53:19 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:25.433 02:53:19 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:25.433 02:53:19 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:25.433 02:53:19 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:25.433 02:53:19 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:25.434 02:53:19 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.434 02:53:19 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:25.434 02:53:19 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.434 02:53:19 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:25.434 02:53:19 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:25.434 02:53:19 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.434 02:53:19 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:25.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:25.434 02:53:19 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.434 02:53:19 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.434 02:53:19 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.434 02:53:19 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:25.434 02:53:19 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.434 02:53:19 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:25.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.434 --rc genhtml_branch_coverage=1 00:05:25.434 --rc genhtml_function_coverage=1 00:05:25.434 --rc genhtml_legend=1 00:05:25.434 --rc geninfo_all_blocks=1 00:05:25.434 --rc geninfo_unexecuted_blocks=1 00:05:25.434 00:05:25.434 ' 00:05:25.434 02:53:19 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:25.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.434 --rc genhtml_branch_coverage=1 00:05:25.434 --rc genhtml_function_coverage=1 00:05:25.434 --rc genhtml_legend=1 00:05:25.434 --rc geninfo_all_blocks=1 00:05:25.434 --rc geninfo_unexecuted_blocks=1 00:05:25.434 00:05:25.434 ' 00:05:25.434 02:53:19 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:25.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.434 --rc genhtml_branch_coverage=1 00:05:25.434 --rc genhtml_function_coverage=1 00:05:25.434 --rc genhtml_legend=1 00:05:25.434 --rc geninfo_all_blocks=1 00:05:25.434 --rc geninfo_unexecuted_blocks=1 00:05:25.434 00:05:25.434 ' 00:05:25.434 02:53:19 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:25.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.434 --rc genhtml_branch_coverage=1 00:05:25.434 --rc genhtml_function_coverage=1 00:05:25.434 --rc genhtml_legend=1 00:05:25.434 --rc geninfo_all_blocks=1 00:05:25.434 --rc geninfo_unexecuted_blocks=1 00:05:25.434 00:05:25.434 ' 00:05:25.434 02:53:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:25.434 02:53:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59642 00:05:25.434 02:53:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59642 00:05:25.434 02:53:19 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59642 ']' 00:05:25.434 02:53:19 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.434 02:53:19 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:25.434 02:53:19 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.434 02:53:19 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.434 02:53:19 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.434 02:53:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:25.770 [2024-12-10 02:53:19.872518] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
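[editor's note] This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over /var/tmp/spdk.sock; that allowlist is what the rest of the test exercises. A hedged sketch of probing it once the socket is up:

    # Allowed: returns the version object (SPDK v25.01-pre, sha1 86d35c37a here).
    $SPDK/scripts/rpc.py spdk_get_version

    # Allowed: lists exactly the permitted methods, which the test sorts and
    # compares against its expected_methods array.
    $SPDK/scripts/rpc.py rpc_get_methods

    # Anything else is rejected with JSON-RPC error -32601 "Method not found",
    # as the env_dpdk_get_mem_stats call later in this test demonstrates.
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats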
00:05:25.770 [2024-12-10 02:53:19.872676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59642 ] 00:05:25.770 [2024-12-10 02:53:20.039092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.031 [2024-12-10 02:53:20.179515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.601 02:53:20 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.601 02:53:20 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:26.601 02:53:20 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:26.864 { 00:05:26.864 "version": "SPDK v25.01-pre git sha1 86d35c37a", 00:05:26.864 "fields": { 00:05:26.864 "major": 25, 00:05:26.864 "minor": 1, 00:05:26.864 "patch": 0, 00:05:26.864 "suffix": "-pre", 00:05:26.864 "commit": "86d35c37a" 00:05:26.864 } 00:05:26.864 } 00:05:26.864 02:53:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:26.864 02:53:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:26.864 02:53:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:26.864 02:53:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:26.864 02:53:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:26.864 02:53:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:26.864 02:53:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.864 02:53:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:26.864 02:53:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:26.864 02:53:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:26.864 02:53:21 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.125 request: 00:05:27.125 { 00:05:27.125 "method": "env_dpdk_get_mem_stats", 00:05:27.125 "req_id": 1 00:05:27.125 } 00:05:27.125 Got JSON-RPC error response 00:05:27.125 response: 00:05:27.125 { 00:05:27.125 "code": -32601, 00:05:27.125 "message": "Method not found" 00:05:27.125 } 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.125 02:53:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59642 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59642 ']' 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59642 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59642 00:05:27.125 killing process with pid 59642 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59642' 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@973 -- # kill 59642 00:05:27.125 02:53:21 app_cmdline -- common/autotest_common.sh@978 -- # wait 59642 00:05:29.104 ************************************ 00:05:29.104 END TEST app_cmdline 00:05:29.105 ************************************ 00:05:29.105 00:05:29.105 real 0m3.527s 00:05:29.105 user 0m3.706s 00:05:29.105 sys 0m0.608s 00:05:29.105 02:53:23 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.105 02:53:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:29.105 02:53:23 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:29.105 02:53:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.105 02:53:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.105 02:53:23 -- common/autotest_common.sh@10 -- # set +x 00:05:29.105 ************************************ 00:05:29.105 START TEST version 00:05:29.105 ************************************ 00:05:29.105 02:53:23 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:29.105 * Looking for test storage... 
00:05:29.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:29.105 02:53:23 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.105 02:53:23 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.105 02:53:23 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.105 02:53:23 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.105 02:53:23 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.105 02:53:23 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.105 02:53:23 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.105 02:53:23 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.105 02:53:23 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.105 02:53:23 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.105 02:53:23 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.105 02:53:23 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.105 02:53:23 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.105 02:53:23 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.105 02:53:23 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.105 02:53:23 version -- scripts/common.sh@344 -- # case "$op" in 00:05:29.105 02:53:23 version -- scripts/common.sh@345 -- # : 1 00:05:29.105 02:53:23 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.105 02:53:23 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.105 02:53:23 version -- scripts/common.sh@365 -- # decimal 1 00:05:29.105 02:53:23 version -- scripts/common.sh@353 -- # local d=1 00:05:29.105 02:53:23 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.105 02:53:23 version -- scripts/common.sh@355 -- # echo 1 00:05:29.105 02:53:23 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.105 02:53:23 version -- scripts/common.sh@366 -- # decimal 2 00:05:29.105 02:53:23 version -- scripts/common.sh@353 -- # local d=2 00:05:29.105 02:53:23 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.105 02:53:23 version -- scripts/common.sh@355 -- # echo 2 00:05:29.105 02:53:23 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.105 02:53:23 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.105 02:53:23 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.105 02:53:23 version -- scripts/common.sh@368 -- # return 0 00:05:29.105 02:53:23 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.105 02:53:23 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.105 --rc genhtml_branch_coverage=1 00:05:29.105 --rc genhtml_function_coverage=1 00:05:29.105 --rc genhtml_legend=1 00:05:29.105 --rc geninfo_all_blocks=1 00:05:29.105 --rc geninfo_unexecuted_blocks=1 00:05:29.105 00:05:29.105 ' 00:05:29.105 02:53:23 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.105 --rc genhtml_branch_coverage=1 00:05:29.105 --rc genhtml_function_coverage=1 00:05:29.105 --rc genhtml_legend=1 00:05:29.105 --rc geninfo_all_blocks=1 00:05:29.105 --rc geninfo_unexecuted_blocks=1 00:05:29.105 00:05:29.105 ' 00:05:29.105 02:53:23 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.105 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:29.105 --rc genhtml_branch_coverage=1 00:05:29.105 --rc genhtml_function_coverage=1 00:05:29.105 --rc genhtml_legend=1 00:05:29.105 --rc geninfo_all_blocks=1 00:05:29.105 --rc geninfo_unexecuted_blocks=1 00:05:29.105 00:05:29.105 ' 00:05:29.105 02:53:23 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.105 --rc genhtml_branch_coverage=1 00:05:29.105 --rc genhtml_function_coverage=1 00:05:29.105 --rc genhtml_legend=1 00:05:29.105 --rc geninfo_all_blocks=1 00:05:29.105 --rc geninfo_unexecuted_blocks=1 00:05:29.105 00:05:29.105 ' 00:05:29.105 02:53:23 version -- app/version.sh@17 -- # get_header_version major 00:05:29.105 02:53:23 version -- app/version.sh@14 -- # cut -f2 00:05:29.105 02:53:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:29.105 02:53:23 version -- app/version.sh@14 -- # tr -d '"' 00:05:29.105 02:53:23 version -- app/version.sh@17 -- # major=25 00:05:29.105 02:53:23 version -- app/version.sh@18 -- # get_header_version minor 00:05:29.105 02:53:23 version -- app/version.sh@14 -- # cut -f2 00:05:29.105 02:53:23 version -- app/version.sh@14 -- # tr -d '"' 00:05:29.105 02:53:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:29.105 02:53:23 version -- app/version.sh@18 -- # minor=1 00:05:29.105 02:53:23 version -- app/version.sh@19 -- # get_header_version patch 00:05:29.105 02:53:23 version -- app/version.sh@14 -- # cut -f2 00:05:29.105 02:53:23 version -- app/version.sh@14 -- # tr -d '"' 00:05:29.105 02:53:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:29.105 02:53:23 version -- app/version.sh@19 -- # patch=0 00:05:29.105 02:53:23 version -- app/version.sh@20 -- # get_header_version suffix 00:05:29.105 02:53:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:29.105 02:53:23 version -- app/version.sh@14 -- # cut -f2 00:05:29.105 02:53:23 version -- app/version.sh@14 -- # tr -d '"' 00:05:29.105 02:53:23 version -- app/version.sh@20 -- # suffix=-pre 00:05:29.105 02:53:23 version -- app/version.sh@22 -- # version=25.1 00:05:29.105 02:53:23 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:29.105 02:53:23 version -- app/version.sh@28 -- # version=25.1rc0 00:05:29.105 02:53:23 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:29.105 02:53:23 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:29.105 02:53:23 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:29.105 02:53:23 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:29.105 00:05:29.105 real 0m0.186s 00:05:29.105 user 0m0.132s 00:05:29.105 sys 0m0.079s 00:05:29.105 ************************************ 00:05:29.105 END TEST version 00:05:29.105 ************************************ 00:05:29.105 02:53:23 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.105 02:53:23 version -- common/autotest_common.sh@10 -- # set +x 00:05:29.105 02:53:23 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:29.105 02:53:23 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:29.105 02:53:23 -- spdk/autotest.sh@194 -- # uname -s 00:05:29.105 02:53:23 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:29.105 02:53:23 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:29.105 02:53:23 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:29.105 02:53:23 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:29.105 02:53:23 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:29.105 02:53:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:29.105 02:53:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.105 02:53:23 -- common/autotest_common.sh@10 -- # set +x 00:05:29.105 ************************************ 00:05:29.105 START TEST blockdev_nvme 00:05:29.105 ************************************ 00:05:29.105 02:53:23 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:29.105 * Looking for test storage... 00:05:29.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:29.105 02:53:23 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.105 02:53:23 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.105 02:53:23 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.366 02:53:23 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.366 02:53:23 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:29.366 02:53:23 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.366 02:53:23 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.366 --rc genhtml_branch_coverage=1 00:05:29.366 --rc genhtml_function_coverage=1 00:05:29.366 --rc genhtml_legend=1 00:05:29.367 --rc geninfo_all_blocks=1 00:05:29.367 --rc geninfo_unexecuted_blocks=1 00:05:29.367 00:05:29.367 ' 00:05:29.367 02:53:23 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.367 --rc genhtml_branch_coverage=1 00:05:29.367 --rc genhtml_function_coverage=1 00:05:29.367 --rc genhtml_legend=1 00:05:29.367 --rc geninfo_all_blocks=1 00:05:29.367 --rc geninfo_unexecuted_blocks=1 00:05:29.367 00:05:29.367 ' 00:05:29.367 02:53:23 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.367 --rc genhtml_branch_coverage=1 00:05:29.367 --rc genhtml_function_coverage=1 00:05:29.367 --rc genhtml_legend=1 00:05:29.367 --rc geninfo_all_blocks=1 00:05:29.367 --rc geninfo_unexecuted_blocks=1 00:05:29.367 00:05:29.367 ' 00:05:29.367 02:53:23 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.367 --rc genhtml_branch_coverage=1 00:05:29.367 --rc genhtml_function_coverage=1 00:05:29.367 --rc genhtml_legend=1 00:05:29.367 --rc geninfo_all_blocks=1 00:05:29.367 --rc geninfo_unexecuted_blocks=1 00:05:29.367 00:05:29.367 ' 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:29.367 02:53:23 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59819 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59819 00:05:29.367 02:53:23 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59819 ']' 00:05:29.367 02:53:23 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.367 02:53:23 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:29.367 02:53:23 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.367 02:53:23 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.367 02:53:23 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.367 02:53:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:29.367 [2024-12-10 02:53:23.627478] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:29.367 [2024-12-10 02:53:23.627598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59819 ] 00:05:29.628 [2024-12-10 02:53:23.785252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.628 [2024-12-10 02:53:23.886626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.199 02:53:24 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.199 02:53:24 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:30.199 02:53:24 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:30.199 02:53:24 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:05:30.199 02:53:24 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:30.199 02:53:24 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:30.199 02:53:24 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:30.199 02:53:24 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:30.199 02:53:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.199 02:53:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:30.459 02:53:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.459 02:53:24 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:05:30.460 02:53:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.460 02:53:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:30.460 02:53:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.460 02:53:24 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:05:30.460 02:53:24 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:05:30.460 02:53:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.460 02:53:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:30.460 02:53:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.460 02:53:24 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:05:30.460 02:53:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.460 02:53:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:30.720 02:53:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.720 02:53:24 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:30.720 02:53:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.720 02:53:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:30.720 02:53:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.720 02:53:24 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:05:30.720 02:53:24 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:05:30.720 02:53:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.720 02:53:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:30.720 02:53:24 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:05:30.720 02:53:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.720 02:53:24 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:05:30.721 02:53:24 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "dbaa1f9e-4679-4684-abd1-45f1a5285fd4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "dbaa1f9e-4679-4684-abd1-45f1a5285fd4",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2ddbe8ef-5e0f-43cc-8ad4-3a1684ad826a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2ddbe8ef-5e0f-43cc-8ad4-3a1684ad826a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ac8bd32e-1aac-4bb7-a072-f0f764bfabf1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ac8bd32e-1aac-4bb7-a072-f0f764bfabf1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "308f2d77-98d2-4e9e-8304-9220b18ff5a2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "308f2d77-98d2-4e9e-8304-9220b18ff5a2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a00668d8-db2f-4aa2-a6f9-461513afd253"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a00668d8-db2f-4aa2-a6f9-461513afd253",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "115c434e-858c-4683-b50e-dcf13939b350"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "115c434e-858c-4683-b50e-dcf13939b350",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:30.721 02:53:24 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:05:30.721 02:53:24 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:05:30.721 02:53:24 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:05:30.721 02:53:24 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:05:30.721 02:53:24 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59819 00:05:30.721 02:53:24 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59819 ']' 00:05:30.721 02:53:24 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59819 00:05:30.721 02:53:24 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:30.721 02:53:24 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.721 02:53:24 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59819 00:05:30.721 02:53:24 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.721 02:53:24 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.721 killing process with pid 59819 00:05:30.721 02:53:24 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59819' 00:05:30.721 02:53:24 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59819 00:05:30.721 02:53:24 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59819 00:05:32.627 02:53:26 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:32.627 02:53:26 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:32.627 02:53:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:32.627 02:53:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.627 02:53:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:32.627 ************************************ 00:05:32.627 START TEST bdev_hello_world 00:05:32.627 ************************************ 00:05:32.627 02:53:26 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:32.627 [2024-12-10 02:53:26.557680] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:32.627 [2024-12-10 02:53:26.557788] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59898 ] 00:05:32.627 [2024-12-10 02:53:26.714592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.627 [2024-12-10 02:53:26.815137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.197 [2024-12-10 02:53:27.366579] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:33.197 [2024-12-10 02:53:27.366633] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:33.197 [2024-12-10 02:53:27.366656] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:33.197 [2024-12-10 02:53:27.369110] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:33.197 [2024-12-10 02:53:27.369480] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:33.197 [2024-12-10 02:53:27.369504] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:33.197 [2024-12-10 02:53:27.369621] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
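(The hello-world pass above is driven by a single example binary; minus the run_test plumbing, the invocation is just the sketch below, with the paths from this run. hello_bdev opens the named bdev, writes "Hello World!", reads it back, and prints the string before stopping the app, which is exactly the NOTICE sequence logged here.)

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/hello_bdev" \
        --json "$SPDK/test/bdev/bdev.json" \
        -b Nvme0n1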
00:05:33.197 00:05:33.197 [2024-12-10 02:53:27.369639] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:33.770 00:05:33.770 real 0m1.630s 00:05:33.770 user 0m1.358s 00:05:33.770 sys 0m0.164s 00:05:33.770 ************************************ 00:05:33.770 END TEST bdev_hello_world 00:05:33.770 ************************************ 00:05:33.770 02:53:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.770 02:53:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:34.089 02:53:28 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:05:34.089 02:53:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:34.089 02:53:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.089 02:53:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:34.089 ************************************ 00:05:34.089 START TEST bdev_bounds 00:05:34.089 ************************************ 00:05:34.089 02:53:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:34.089 02:53:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59940 00:05:34.089 02:53:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.089 Process bdevio pid: 59940 00:05:34.089 02:53:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59940' 00:05:34.089 02:53:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59940 00:05:34.089 02:53:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59940 ']' 00:05:34.089 02:53:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.089 02:53:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:34.089 02:53:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.089 02:53:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.089 02:53:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.089 02:53:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:34.089 [2024-12-10 02:53:28.231469] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
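(The bounds test starting here runs as a client/server pair. A minimal sketch with the flags from the trace: bdevio is started with -w so it waits for an RPC trigger, then tests.py fires perform_tests against it, and the per-bdev I/O suites below are the result.)

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK/test/bdev/bdev.json" &
    bdevio_pid=$!
    # tests.py connects over RPC and drives the boundary-condition suites
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"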
00:05:34.089 [2024-12-10 02:53:28.231598] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59940 ] 00:05:34.089 [2024-12-10 02:53:28.391711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.349 [2024-12-10 02:53:28.493921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.349 [2024-12-10 02:53:28.494175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.349 [2024-12-10 02:53:28.494311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.919 02:53:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.919 02:53:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:34.919 02:53:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:34.919 I/O targets: 00:05:34.919 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:34.919 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:34.919 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:34.919 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:34.919 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:34.919 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:34.919 00:05:34.919 00:05:34.919 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.919 http://cunit.sourceforge.net/ 00:05:34.919 00:05:34.919 00:05:34.919 Suite: bdevio tests on: Nvme3n1 00:05:34.919 Test: blockdev write read block ...passed 00:05:34.919 Test: blockdev write zeroes read block ...passed 00:05:34.919 Test: blockdev write zeroes read no split ...passed 00:05:34.919 Test: blockdev write zeroes read split ...passed 00:05:34.919 Test: blockdev write zeroes read split partial ...passed 00:05:34.919 Test: blockdev reset ...[2024-12-10 02:53:29.247183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:34.919 [2024-12-10 02:53:29.250368] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:05:34.919 passed 00:05:34.919 Test: blockdev write read 8 blocks ...passed 00:05:34.919 Test: blockdev write read size > 128k ...passed 00:05:34.919 Test: blockdev write read invalid size ...passed 00:05:34.919 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:34.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:34.919 Test: blockdev write read max offset ...passed 00:05:34.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:34.919 Test: blockdev writev readv 8 blocks ...passed 00:05:34.919 Test: blockdev writev readv 30 x 1block ...passed 00:05:34.919 Test: blockdev writev readv block ...passed 00:05:34.919 Test: blockdev writev readv size > 128k ...passed 00:05:34.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:34.919 Test: blockdev comparev and writev ...[2024-12-10 02:53:29.256636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b160a000 len:0x1000 00:05:34.919 [2024-12-10 02:53:29.256683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:34.919 passed 00:05:34.919 Test: blockdev nvme passthru rw ...passed 00:05:34.919 Test: blockdev nvme passthru vendor specific ...passed 00:05:34.919 Test: blockdev nvme admin passthru ...[2024-12-10 02:53:29.257114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:34.919 [2024-12-10 02:53:29.257140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:34.919 passed 00:05:34.919 Test: blockdev copy ...passed 00:05:34.919 Suite: bdevio tests on: Nvme2n3 00:05:34.919 Test: blockdev write read block ...passed 00:05:34.919 Test: blockdev write zeroes read block ...passed 00:05:34.919 Test: blockdev write zeroes read no split ...passed 00:05:34.919 Test: blockdev write zeroes read split ...passed 00:05:34.919 Test: blockdev write zeroes read split partial ...passed 00:05:34.919 Test: blockdev reset ...[2024-12-10 02:53:29.300040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:35.178 [2024-12-10 02:53:29.302993] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:35.178 passed 00:05:35.178 Test: blockdev write read 8 blocks ...passed 00:05:35.178 Test: blockdev write read size > 128k ...passed 00:05:35.178 Test: blockdev write read invalid size ...passed 00:05:35.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.178 Test: blockdev write read max offset ...passed 00:05:35.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.178 Test: blockdev writev readv 8 blocks ...passed 00:05:35.178 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.178 Test: blockdev writev readv block ...passed 00:05:35.178 Test: blockdev writev readv size > 128k ...passed 00:05:35.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.178 Test: blockdev comparev and writev ...[2024-12-10 02:53:29.308465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5a06000 len:0x1000 00:05:35.178 [2024-12-10 02:53:29.308512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:35.178 passed 00:05:35.178 Test: blockdev nvme passthru rw ...passed 00:05:35.178 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.178 Test: blockdev nvme admin passthru ...[2024-12-10 02:53:29.309041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:35.178 [2024-12-10 02:53:29.309066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:35.178 passed 00:05:35.178 Test: blockdev copy ...passed 00:05:35.178 Suite: bdevio tests on: Nvme2n2 00:05:35.178 Test: blockdev write read block ...passed 00:05:35.178 Test: blockdev write zeroes read block ...passed 00:05:35.178 Test: blockdev write zeroes read no split ...passed 00:05:35.178 Test: blockdev write zeroes read split ...passed 00:05:35.178 Test: blockdev write zeroes read split partial ...passed 00:05:35.178 Test: blockdev reset ...[2024-12-10 02:53:29.358620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:35.178 [2024-12-10 02:53:29.361635] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:35.178 passed 00:05:35.178 Test: blockdev write read 8 blocks ...passed 00:05:35.178 Test: blockdev write read size > 128k ...passed 00:05:35.178 Test: blockdev write read invalid size ...passed 00:05:35.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.178 Test: blockdev write read max offset ...passed 00:05:35.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.178 Test: blockdev writev readv 8 blocks ...passed 00:05:35.178 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.178 Test: blockdev writev readv block ...passed 00:05:35.178 Test: blockdev writev readv size > 128k ...passed 00:05:35.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.178 Test: blockdev comparev and writev ...[2024-12-10 02:53:29.366966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c823c000 len:0x1000 00:05:35.178 [2024-12-10 02:53:29.367011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:35.178 passed 00:05:35.178 Test: blockdev nvme passthru rw ...passed 00:05:35.178 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.178 Test: blockdev nvme admin passthru ...[2024-12-10 02:53:29.367427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:35.178 [2024-12-10 02:53:29.367454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:35.178 passed 00:05:35.178 Test: blockdev copy ...passed 00:05:35.178 Suite: bdevio tests on: Nvme2n1 00:05:35.178 Test: blockdev write read block ...passed 00:05:35.178 Test: blockdev write zeroes read block ...passed 00:05:35.178 Test: blockdev write zeroes read no split ...passed 00:05:35.178 Test: blockdev write zeroes read split ...passed 00:05:35.178 Test: blockdev write zeroes read split partial ...passed 00:05:35.178 Test: blockdev reset ...[2024-12-10 02:53:29.411252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:35.178 [2024-12-10 02:53:29.414362] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:35.178 passed 00:05:35.178 Test: blockdev write read 8 blocks ...passed 00:05:35.178 Test: blockdev write read size > 128k ...passed 00:05:35.178 Test: blockdev write read invalid size ...passed 00:05:35.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.178 Test: blockdev write read max offset ...passed 00:05:35.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.178 Test: blockdev writev readv 8 blocks ...passed 00:05:35.178 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.178 Test: blockdev writev readv block ...passed 00:05:35.178 Test: blockdev writev readv size > 128k ...passed 00:05:35.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.178 Test: blockdev comparev and writev ...[2024-12-10 02:53:29.419525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8238000 len:0x1000 00:05:35.178 [2024-12-10 02:53:29.419573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:35.178 passed 00:05:35.178 Test: blockdev nvme passthru rw ...passed 00:05:35.178 Test: blockdev nvme passthru vendor specific ...[2024-12-10 02:53:29.419962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:35.178 [2024-12-10 02:53:29.419986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:35.178 passed 00:05:35.178 Test: blockdev nvme admin passthru ...passed 00:05:35.178 Test: blockdev copy ...passed 00:05:35.178 Suite: bdevio tests on: Nvme1n1 00:05:35.178 Test: blockdev write read block ...passed 00:05:35.178 Test: blockdev write zeroes read block ...passed 00:05:35.178 Test: blockdev write zeroes read no split ...passed 00:05:35.178 Test: blockdev write zeroes read split ...passed 00:05:35.178 Test: blockdev write zeroes read split partial ...passed 00:05:35.178 Test: blockdev reset ...[2024-12-10 02:53:29.462192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:35.178 [2024-12-10 02:53:29.465003] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:05:35.178 passed 00:05:35.178 Test: blockdev write read 8 blocks ...passed 00:05:35.178 Test: blockdev write read size > 128k ...passed 00:05:35.178 Test: blockdev write read invalid size ...passed 00:05:35.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.178 Test: blockdev write read max offset ...passed 00:05:35.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.178 Test: blockdev writev readv 8 blocks ...passed 00:05:35.178 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.178 Test: blockdev writev readv block ...passed 00:05:35.178 Test: blockdev writev readv size > 128k ...passed 00:05:35.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.179 Test: blockdev comparev and writev ...[2024-12-10 02:53:29.470415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8234000 len:0x1000 00:05:35.179 [2024-12-10 02:53:29.470458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:35.179 passed 00:05:35.179 Test: blockdev nvme passthru rw ...passed 00:05:35.179 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.179 Test: blockdev nvme admin passthru ...[2024-12-10 02:53:29.470885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:35.179 [2024-12-10 02:53:29.470908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:35.179 passed 00:05:35.179 Test: blockdev copy ...passed 00:05:35.179 Suite: bdevio tests on: Nvme0n1 00:05:35.179 Test: blockdev write read block ...passed 00:05:35.179 Test: blockdev write zeroes read block ...passed 00:05:35.179 Test: blockdev write zeroes read no split ...passed 00:05:35.179 Test: blockdev write zeroes read split ...passed 00:05:35.179 Test: blockdev write zeroes read split partial ...passed 00:05:35.179 Test: blockdev reset ...[2024-12-10 02:53:29.512875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:35.179 [2024-12-10 02:53:29.515470] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:05:35.179 passed 00:05:35.179 Test: blockdev write read 8 blocks ...passed 00:05:35.179 Test: blockdev write read size > 128k ...passed 00:05:35.179 Test: blockdev write read invalid size ...passed 00:05:35.179 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.179 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.179 Test: blockdev write read max offset ...passed 00:05:35.179 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.179 Test: blockdev writev readv 8 blocks ...passed 00:05:35.179 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.179 Test: blockdev writev readv block ...passed 00:05:35.179 Test: blockdev writev readv size > 128k ...passed 00:05:35.179 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.179 Test: blockdev comparev and writev ...[2024-12-10 02:53:29.519659] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:35.179 separate metadata which is not supported yet. 
00:05:35.179 passed 00:05:35.179 Test: blockdev nvme passthru rw ...passed 00:05:35.179 Test: blockdev nvme passthru vendor specific ...[2024-12-10 02:53:29.519958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:35.179 passed 00:05:35.179 Test: blockdev nvme admin passthru ...[2024-12-10 02:53:29.520002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:35.179 passed 00:05:35.179 Test: blockdev copy ...passed 00:05:35.179 00:05:35.179 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.179 suites 6 6 n/a 0 0 00:05:35.179 tests 138 138 138 0 0 00:05:35.179 asserts 893 893 893 0 n/a 00:05:35.179 00:05:35.179 Elapsed time = 0.937 seconds 00:05:35.179 0 00:05:35.179 02:53:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59940 00:05:35.179 02:53:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59940 ']' 00:05:35.179 02:53:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59940 00:05:35.179 02:53:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:35.179 02:53:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.179 02:53:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59940 00:05:35.436 02:53:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.436 killing process with pid 59940 00:05:35.436 02:53:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.436 02:53:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59940' 00:05:35.436 02:53:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59940 00:05:35.436 02:53:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59940 00:05:36.002 02:53:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:36.002 00:05:36.002 real 0m2.073s 00:05:36.002 user 0m5.350s 00:05:36.002 sys 0m0.275s 00:05:36.002 02:53:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.002 02:53:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:36.002 ************************************ 00:05:36.002 END TEST bdev_bounds 00:05:36.002 ************************************ 00:05:36.002 02:53:30 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:36.002 02:53:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:36.002 02:53:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.002 02:53:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:36.002 ************************************ 00:05:36.002 START TEST bdev_nbd 00:05:36.002 ************************************ 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:36.002 02:53:30 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59994 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59994 /var/tmp/spdk-nbd.sock 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59994 ']' 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.002 02:53:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:36.002 [2024-12-10 02:53:30.357683] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
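(The nbd phase that follows maps each bdev onto a kernel /dev/nbdX node and round-trips one 4 KiB block through it with dd. Per disk, the trace below amounts to this sketch, using the spdk-nbd socket from this run:)

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC nbd_start_disk Nvme0n1 /dev/nbd0
    # O_DIRECT read of one block proves the kernel<->SPDK data path works
    dd if=/dev/nbd0 of="$SPDK/test/bdev/nbdtest" bs=4096 count=1 iflag=direct
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_get_disks   # reports an empty list once every disk is stopped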
00:05:36.002 [2024-12-10 02:53:30.357831] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:36.260 [2024-12-10 02:53:30.526714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.260 [2024-12-10 02:53:30.626075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:36.825 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:37.082 1+0 records in 
00:05:37.082 1+0 records out 00:05:37.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263209 s, 15.6 MB/s 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:37.082 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:37.083 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:37.340 1+0 records in 00:05:37.340 1+0 records out 00:05:37.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337094 s, 12.2 MB/s 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:37.340 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:37.600 1+0 records in 00:05:37.600 1+0 records out 00:05:37.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473024 s, 8.7 MB/s 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:37.600 02:53:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:37.860 1+0 records in 00:05:37.860 1+0 records out 00:05:37.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353179 s, 11.6 MB/s 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.860 02:53:32 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:37.860 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:38.120 1+0 records in 00:05:38.120 1+0 records out 00:05:38.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376308 s, 10.9 MB/s 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:38.120 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:38.380 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:38.380 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:38.380 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:38.380 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:05:38.380 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:38.380 02:53:32 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.380 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.380 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:05:38.380 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:38.380 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.380 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.381 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:38.381 1+0 records in 00:05:38.381 1+0 records out 00:05:38.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411109 s, 10.0 MB/s 00:05:38.381 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:38.381 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:38.381 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:38.381 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.381 02:53:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:38.381 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:38.381 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:38.381 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.642 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:38.642 { 00:05:38.642 "nbd_device": "/dev/nbd0", 00:05:38.642 "bdev_name": "Nvme0n1" 00:05:38.642 }, 00:05:38.642 { 00:05:38.642 "nbd_device": "/dev/nbd1", 00:05:38.642 "bdev_name": "Nvme1n1" 00:05:38.642 }, 00:05:38.642 { 00:05:38.642 "nbd_device": "/dev/nbd2", 00:05:38.642 "bdev_name": "Nvme2n1" 00:05:38.642 }, 00:05:38.642 { 00:05:38.642 "nbd_device": "/dev/nbd3", 00:05:38.642 "bdev_name": "Nvme2n2" 00:05:38.642 }, 00:05:38.642 { 00:05:38.642 "nbd_device": "/dev/nbd4", 00:05:38.642 "bdev_name": "Nvme2n3" 00:05:38.642 }, 00:05:38.642 { 00:05:38.642 "nbd_device": "/dev/nbd5", 00:05:38.642 "bdev_name": "Nvme3n1" 00:05:38.643 } 00:05:38.643 ]' 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:38.643 { 00:05:38.643 "nbd_device": "/dev/nbd0", 00:05:38.643 "bdev_name": "Nvme0n1" 00:05:38.643 }, 00:05:38.643 { 00:05:38.643 "nbd_device": "/dev/nbd1", 00:05:38.643 "bdev_name": "Nvme1n1" 00:05:38.643 }, 00:05:38.643 { 00:05:38.643 "nbd_device": "/dev/nbd2", 00:05:38.643 "bdev_name": "Nvme2n1" 00:05:38.643 }, 00:05:38.643 { 00:05:38.643 "nbd_device": "/dev/nbd3", 00:05:38.643 "bdev_name": "Nvme2n2" 00:05:38.643 }, 00:05:38.643 { 00:05:38.643 "nbd_device": "/dev/nbd4", 00:05:38.643 "bdev_name": "Nvme2n3" 00:05:38.643 }, 00:05:38.643 { 00:05:38.643 "nbd_device": "/dev/nbd5", 00:05:38.643 "bdev_name": "Nvme3n1" 00:05:38.643 } 00:05:38.643 ]' 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.643 02:53:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.643 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:38.643 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.643 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.643 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.904 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.904 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.904 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.904 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.904 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.904 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.904 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:38.904 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.904 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.904 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:39.165 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:39.165 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:39.165 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:39.165 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.165 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.165 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:39.165 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:39.165 02:53:33 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:39.165 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.165 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:39.427 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:39.427 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:39.428 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:39.428 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.428 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.428 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:39.428 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:39.428 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.428 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.428 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:39.689 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:39.689 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:39.689 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:39.689 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.689 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.689 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:39.689 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:39.689 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.689 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.689 02:53:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:39.689 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:39.689 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:39.689 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:39.689 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.689 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.689 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:39.689 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:39.689 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.689 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.689 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.689 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.950 02:53:34 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:39.950 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:40.211 /dev/nbd0 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.211 
02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:40.211 1+0 records in 00:05:40.211 1+0 records out 00:05:40.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255833 s, 16.0 MB/s 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:40.211 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:40.485 /dev/nbd1 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:40.485 1+0 records in 00:05:40.485 1+0 records out 00:05:40.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275912 s, 14.8 MB/s 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:40.485 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:40.747 /dev/nbd10 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:40.747 1+0 records in 00:05:40.747 1+0 records out 00:05:40.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415971 s, 9.8 MB/s 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:40.747 02:53:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:41.009 /dev/nbd11 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.009 02:53:35 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:41.009 1+0 records in 00:05:41.009 1+0 records out 00:05:41.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043645 s, 9.4 MB/s 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:41.009 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:41.270 /dev/nbd12 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:41.270 1+0 records in 00:05:41.270 1+0 records out 00:05:41.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419024 s, 9.8 MB/s 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:41.270 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:41.270 /dev/nbd13 
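The readiness probe traced here runs once per device, and the same command sequence repeats for /dev/nbd13 immediately below. Reconstructed from the traced commands alone, the waitfornbd helper in common/autotest_common.sh looks roughly like the sketch that follows (the probe commands and the temp-file path mirror the trace; the retry delay and exact control flow are assumptions):

waitfornbd() {
    local nbd_name=$1 i
    local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    # Phase 1: poll until the kernel registers the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; the delay is not visible in the trace
    done
    # Phase 2: prove the device services I/O with one 4 KiB direct read.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || continue
        local size
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ] && return 0
    done
    return 1
}

The waitfornbd_exit variant traced in the stop path appears to invert phase 1, looping until grep -q -w no longer finds the device in /proc/partitions.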
00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:41.532 1+0 records in 00:05:41.532 1+0 records out 00:05:41.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441769 s, 9.3 MB/s 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.532 { 00:05:41.532 "nbd_device": "/dev/nbd0", 00:05:41.532 "bdev_name": "Nvme0n1" 00:05:41.532 }, 00:05:41.532 { 00:05:41.532 "nbd_device": "/dev/nbd1", 00:05:41.532 "bdev_name": "Nvme1n1" 00:05:41.532 }, 00:05:41.532 { 00:05:41.532 "nbd_device": "/dev/nbd10", 00:05:41.532 "bdev_name": "Nvme2n1" 00:05:41.532 }, 00:05:41.532 { 00:05:41.532 "nbd_device": "/dev/nbd11", 00:05:41.532 "bdev_name": "Nvme2n2" 00:05:41.532 }, 00:05:41.532 { 00:05:41.532 "nbd_device": "/dev/nbd12", 00:05:41.532 "bdev_name": "Nvme2n3" 00:05:41.532 }, 00:05:41.532 { 00:05:41.532 "nbd_device": "/dev/nbd13", 00:05:41.532 "bdev_name": "Nvme3n1" 00:05:41.532 } 00:05:41.532 ]' 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.532 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.532 { 00:05:41.532 "nbd_device": "/dev/nbd0", 00:05:41.532 "bdev_name": "Nvme0n1" 00:05:41.532 }, 00:05:41.532 { 00:05:41.532 "nbd_device": "/dev/nbd1", 00:05:41.532 "bdev_name": "Nvme1n1" 00:05:41.532 
}, 00:05:41.532 { 00:05:41.532 "nbd_device": "/dev/nbd10", 00:05:41.532 "bdev_name": "Nvme2n1" 00:05:41.532 }, 00:05:41.532 { 00:05:41.532 "nbd_device": "/dev/nbd11", 00:05:41.532 "bdev_name": "Nvme2n2" 00:05:41.532 }, 00:05:41.532 { 00:05:41.532 "nbd_device": "/dev/nbd12", 00:05:41.532 "bdev_name": "Nvme2n3" 00:05:41.532 }, 00:05:41.532 { 00:05:41.532 "nbd_device": "/dev/nbd13", 00:05:41.532 "bdev_name": "Nvme3n1" 00:05:41.532 } 00:05:41.532 ]' 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.794 /dev/nbd1 00:05:41.794 /dev/nbd10 00:05:41.794 /dev/nbd11 00:05:41.794 /dev/nbd12 00:05:41.794 /dev/nbd13' 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.794 /dev/nbd1 00:05:41.794 /dev/nbd10 00:05:41.794 /dev/nbd11 00:05:41.794 /dev/nbd12 00:05:41.794 /dev/nbd13' 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:41.794 256+0 records in 00:05:41.794 256+0 records out 00:05:41.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498309 s, 210 MB/s 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.794 256+0 records in 00:05:41.794 256+0 records out 00:05:41.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.060605 s, 17.3 MB/s 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.794 02:53:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.794 256+0 records in 00:05:41.794 256+0 records out 00:05:41.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0625277 s, 16.8 MB/s 00:05:41.794 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.794 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:41.794 256+0 records in 00:05:41.794 256+0 records out 00:05:41.794 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0631484 s, 16.6 MB/s 00:05:41.794 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.794 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:05:42.055 256+0 records in 00:05:42.055 256+0 records out 00:05:42.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0642858 s, 16.3 MB/s 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:42.055 256+0 records in 00:05:42.055 256+0 records out 00:05:42.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0635743 s, 16.5 MB/s 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:42.055 256+0 records in 00:05:42.055 256+0 records out 00:05:42.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0651759 s, 16.1 MB/s 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.055 02:53:36 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.055 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.313 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.313 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.313 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.313 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.313 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.313 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.313 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:42.313 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.313 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.313 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.573 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.573 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.573 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.573 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.573 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.573 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.573 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:42.573 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.573 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.573 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:42.832 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:42.832 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:42.832 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:42.832 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.832 
02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.832 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:42.832 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:42.832 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.832 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.832 02:53:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:05:42.832 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:05:42.832 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:05:42.832 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:05:42.832 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.832 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.832 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:05:42.832 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:42.832 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.832 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.832 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:05:43.100 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:05:43.100 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:05:43.100 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:05:43.100 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.100 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.100 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:05:43.100 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:43.100 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.100 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.100 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:05:43.377 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:05:43.377 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:05:43.377 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:05:43.377 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.377 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.377 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:05:43.377 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:43.377 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.377 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.377 02:53:37 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.377 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.637 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.637 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.637 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.637 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.637 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.637 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.637 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:43.637 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.638 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.638 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.638 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.638 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.638 02:53:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:43.638 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.638 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:05:43.638 02:53:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:05:43.897 malloc_lvol_verify 00:05:43.897 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:05:43.897 a0b36ed6-889f-4c78-b0c9-288cf7cf755d 00:05:44.156 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:05:44.156 7269fe29-01cc-4652-9c70-5a1d04707405 00:05:44.156 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:05:44.417 /dev/nbd0 00:05:44.417 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:05:44.417 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:05:44.417 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:05:44.417 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:05:44.417 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:05:44.417 mke2fs 1.47.0 (5-Feb-2023) 00:05:44.417 Discarding device blocks: 0/4096 done 00:05:44.417 Creating filesystem with 4096 1k blocks and 1024 inodes 00:05:44.417 00:05:44.417 Allocating group tables: 0/1 done 00:05:44.417 Writing inode tables: 0/1 done 00:05:44.417 Creating journal (1024 blocks): done 00:05:44.417 Writing superblocks and filesystem accounting information: 0/1 done 00:05:44.417 00:05:44.417 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
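The lvol round-trip just traced can be replayed by hand against the same RPC socket. The sequence below mirrors the traced calls one-for-one (socket path, bdev names, and sizes are taken from the trace; only the standalone-script framing is added):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# A 16 MiB malloc bdev with 512-byte blocks backs the logical volume store.
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
# Carve a 4 MiB volume out of the store and export it as a kernel nbd device.
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0

# mkfs only succeeds once the device reports a non-zero capacity, which is
# what the /sys/block/nbd0/size check traced above verifies (8192 512-byte
# sectors == 4 MiB).
mkfs.ext4 /dev/nbd0

# Teardown, as the trace continues below:
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0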
00:05:44.417 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.417 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:44.417 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.417 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:44.417 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.417 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59994 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59994 ']' 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59994 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59994 00:05:44.678 killing process with pid 59994 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59994' 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59994 00:05:44.678 02:53:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59994 00:05:45.245 ************************************ 00:05:45.245 END TEST bdev_nbd 00:05:45.245 ************************************ 00:05:45.245 02:53:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:05:45.245 00:05:45.245 real 0m9.311s 00:05:45.245 user 0m13.547s 00:05:45.245 sys 0m2.934s 00:05:45.245 02:53:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.245 02:53:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:45.506 02:53:39 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:05:45.506 02:53:39 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:05:45.506 skipping fio tests on NVMe due to multi-ns failures. 00:05:45.506 02:53:39 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
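The guarded teardown traced above (killprocess 59994) boils down to a handful of safety checks before the TERM signal. A sketch inferred from the traced tests, with the sudo-unwrapping branch simplified (the real helper resolves the child pid when the target turns out to be a sudo wrapper):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 1   # anything left to kill?
    if [ "$(uname)" = Linux ]; then
        # reactor_0 in this run; a sudo wrapper would need its child killed instead.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # simplification, see above
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reap it so the EXIT trap does not race the socket cleanup
}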
00:05:45.506 02:53:39 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:45.506 02:53:39 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:45.506 02:53:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:05:45.506 02:53:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.506 02:53:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:45.506 ************************************ 00:05:45.506 START TEST bdev_verify 00:05:45.506 ************************************ 00:05:45.506 02:53:39 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:45.506 [2024-12-10 02:53:39.698563] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:45.506 [2024-12-10 02:53:39.698684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60361 ] 00:05:45.506 [2024-12-10 02:53:39.858610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.767 [2024-12-10 02:53:39.960179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.767 [2024-12-10 02:53:39.960321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.407 Running I/O for 5 seconds... 00:05:48.728 22528.00 IOPS, 88.00 MiB/s [2024-12-10T02:53:44.057Z] 23296.00 IOPS, 91.00 MiB/s [2024-12-10T02:53:44.989Z] 23552.00 IOPS, 92.00 MiB/s [2024-12-10T02:53:45.923Z] 23536.00 IOPS, 91.94 MiB/s [2024-12-10T02:53:45.923Z] 23500.80 IOPS, 91.80 MiB/s 00:05:51.535 Latency(us) 00:05:51.535 [2024-12-10T02:53:45.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:51.535 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:51.535 Verification LBA range: start 0x0 length 0xbd0bd 00:05:51.535 Nvme0n1 : 5.04 1956.59 7.64 0.00 0.00 65118.64 11191.53 72593.72 00:05:51.535 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:51.535 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:05:51.535 Nvme0n1 : 5.06 1921.78 7.51 0.00 0.00 66399.52 11796.48 75013.51 00:05:51.535 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:51.535 Verification LBA range: start 0x0 length 0xa0000 00:05:51.535 Nvme1n1 : 5.07 1968.51 7.69 0.00 0.00 64760.17 11544.42 61301.37 00:05:51.535 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:51.535 Verification LBA range: start 0xa0000 length 0xa0000 00:05:51.535 Nvme1n1 : 5.06 1921.24 7.50 0.00 0.00 66215.52 13913.80 66544.25 00:05:51.535 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:51.535 Verification LBA range: start 0x0 length 0x80000 00:05:51.535 Nvme2n1 : 5.07 1967.99 7.69 0.00 0.00 64674.54 10183.29 58881.58 00:05:51.535 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:51.535 Verification LBA range: start 0x80000 length 0x80000 00:05:51.535 Nvme2n1 : 5.07 1920.11 7.50 0.00 0.00 66094.48 15627.82 62914.56 00:05:51.535 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:51.535 Verification LBA range: start 0x0 length 0x80000 00:05:51.535 Nvme2n2 : 5.07 1967.43 7.69 0.00 0.00 64563.11 10384.94 58478.28 00:05:51.535 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:51.535 Verification LBA range: start 0x80000 length 0x80000 00:05:51.535 Nvme2n2 : 5.07 1919.54 7.50 0.00 0.00 65959.54 15022.87 64931.05 00:05:51.535 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:51.535 Verification LBA range: start 0x0 length 0x80000 00:05:51.535 Nvme2n3 : 5.08 1966.21 7.68 0.00 0.00 64455.86 12552.66 60494.77 00:05:51.535 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:51.535 Verification LBA range: start 0x80000 length 0x80000 00:05:51.535 Nvme2n3 : 5.08 1928.40 7.53 0.00 0.00 65544.16 2860.90 65737.65 00:05:51.535 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:51.535 Verification LBA range: start 0x0 length 0x20000 00:05:51.535 Nvme3n1 : 5.08 1964.72 7.67 0.00 0.00 64348.60 9124.63 62511.26 00:05:51.535 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:51.535 Verification LBA range: start 0x20000 length 0x20000 00:05:51.535 Nvme3n1 : 5.09 1936.68 7.57 0.00 0.00 65220.95 6553.60 69770.63 00:05:51.535 [2024-12-10T02:53:45.923Z] =================================================================================================================== 00:05:51.535 [2024-12-10T02:53:45.923Z] Total : 23339.22 91.17 0.00 0.00 65271.72 2860.90 75013.51 00:05:52.534 00:05:52.534 real 0m7.238s 00:05:52.534 user 0m13.576s 00:05:52.534 sys 0m0.206s 00:05:52.534 02:53:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.534 02:53:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:05:52.534 ************************************ 00:05:52.534 END TEST bdev_verify 00:05:52.534 ************************************ 00:05:52.534 02:53:46 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:52.534 02:53:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:05:52.534 02:53:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.534 02:53:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:52.791 ************************************ 00:05:52.791 START TEST bdev_verify_big_io 00:05:52.791 ************************************ 00:05:52.791 02:53:46 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:52.791 [2024-12-10 02:53:46.982232] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
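A quick sanity check on the bdev_verify table above: the MiB/s column is the IOPS figure scaled by the 4096-byte IO size, and every bdev appears twice because -m 0x3 runs one verify job per reactor core (Core Mask 0x1 and 0x2). Checking the Total row:

awk 'BEGIN { printf "%.2f MiB/s\n", 23339.22 * 4096 / (1024 * 1024) }'
# prints 91.17 MiB/s, matching the reported total for 23339.22 IOPS

The same relation holds for the bdev_verify_big_io run that starts next, with the IO size raised to 65536 bytes.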
00:05:52.792 [2024-12-10 02:53:46.982359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60459 ] 00:05:52.792 [2024-12-10 02:53:47.143199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.049 [2024-12-10 02:53:47.244587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.049 [2024-12-10 02:53:47.244863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.615 Running I/O for 5 seconds... 00:05:59.450 704.00 IOPS, 44.00 MiB/s [2024-12-10T02:53:54.095Z] 2136.00 IOPS, 133.50 MiB/s [2024-12-10T02:53:54.095Z] 2557.67 IOPS, 159.85 MiB/s 00:05:59.707 Latency(us) 00:05:59.707 [2024-12-10T02:53:54.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:59.708 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:59.708 Verification LBA range: start 0x0 length 0xbd0b 00:05:59.708 Nvme0n1 : 5.62 113.79 7.11 0.00 0.00 1086542.45 13611.32 1238932.87 00:05:59.708 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:59.708 Verification LBA range: start 0xbd0b length 0xbd0b 00:05:59.708 Nvme0n1 : 5.56 115.11 7.19 0.00 0.00 1068590.87 18450.90 1219574.55 00:05:59.708 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:59.708 Verification LBA range: start 0x0 length 0xa000 00:05:59.708 Nvme1n1 : 5.74 115.51 7.22 0.00 0.00 1024044.14 105664.20 1025991.29 00:05:59.708 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:59.708 Verification LBA range: start 0xa000 length 0xa000 00:05:59.708 Nvme1n1 : 5.74 115.63 7.23 0.00 0.00 1016023.92 96388.33 1019538.51 00:05:59.708 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:59.708 Verification LBA range: start 0x0 length 0x8000 00:05:59.708 Nvme2n1 : 5.81 121.20 7.57 0.00 0.00 948459.73 66140.95 845313.58 00:05:59.708 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:59.708 Verification LBA range: start 0x8000 length 0x8000 00:05:59.708 Nvme2n1 : 5.94 124.92 7.81 0.00 0.00 921820.71 61704.66 1045349.61 00:05:59.708 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:59.708 Verification LBA range: start 0x0 length 0x8000 00:05:59.708 Nvme2n2 : 5.88 123.91 7.74 0.00 0.00 896455.48 64124.46 1096971.82 00:05:59.708 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:59.708 Verification LBA range: start 0x8000 length 0x8000 00:05:59.708 Nvme2n2 : 5.94 124.56 7.79 0.00 0.00 888892.70 63317.86 1084066.26 00:05:59.708 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:59.708 Verification LBA range: start 0x0 length 0x8000 00:05:59.708 Nvme2n3 : 5.99 130.90 8.18 0.00 0.00 821956.34 42749.64 2155226.98 00:05:59.708 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:59.708 Verification LBA range: start 0x8000 length 0x8000 00:05:59.708 Nvme2n3 : 6.02 131.72 8.23 0.00 0.00 815095.43 42144.69 1342177.28 00:05:59.708 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:59.708 Verification LBA range: start 0x0 length 0x2000 00:05:59.708 Nvme3n1 : 6.06 150.80 9.42 0.00 0.00 690060.93 759.34 2193943.63 00:05:59.708 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:05:59.708 Verification LBA range: start 0x2000 length 0x2000 00:05:59.708 Nvme3n1 : 6.06 151.63 9.48 0.00 0.00 687210.87 1531.27 1103424.59 00:05:59.708 [2024-12-10T02:53:54.096Z] =================================================================================================================== 00:05:59.708 [2024-12-10T02:53:54.096Z] Total : 1519.66 94.98 0.00 0.00 889936.20 759.34 2193943.63 00:06:01.079 00:06:01.079 real 0m8.375s 00:06:01.079 user 0m15.856s 00:06:01.079 sys 0m0.232s 00:06:01.079 02:53:55 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.079 02:53:55 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:01.079 ************************************ 00:06:01.079 END TEST bdev_verify_big_io 00:06:01.079 ************************************ 00:06:01.079 02:53:55 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:01.079 02:53:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:01.079 02:53:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.079 02:53:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:01.079 ************************************ 00:06:01.079 START TEST bdev_write_zeroes 00:06:01.079 ************************************ 00:06:01.079 02:53:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:01.079 [2024-12-10 02:53:55.397656] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:01.079 [2024-12-10 02:53:55.397770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60570 ] 00:06:01.336 [2024-12-10 02:53:55.560075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.336 [2024-12-10 02:53:55.659972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.904 Running I/O for 1 seconds... 
00:06:03.286 37345.00 IOPS, 145.88 MiB/s 00:06:03.286 Latency(us) 00:06:03.286 [2024-12-10T02:53:57.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:03.286 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:03.286 Nvme0n1 : 1.02 6029.09 23.55 0.00 0.00 21187.22 8872.57 345223.48 00:06:03.286 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:03.286 Nvme1n1 : 1.02 6447.23 25.18 0.00 0.00 19789.57 8771.74 204875.62 00:06:03.286 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:03.286 Nvme2n1 : 1.02 6337.02 24.75 0.00 0.00 20071.80 8721.33 197616.25 00:06:03.286 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:03.286 Nvme2n2 : 1.02 6329.86 24.73 0.00 0.00 20011.11 8973.39 194389.86 00:06:03.286 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:03.286 Nvme2n3 : 1.02 6322.66 24.70 0.00 0.00 20006.34 8922.98 193583.26 00:06:03.286 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:03.286 Nvme3n1 : 1.02 6252.96 24.43 0.00 0.00 20210.98 8418.86 204875.62 00:06:03.286 [2024-12-10T02:53:57.674Z] =================================================================================================================== 00:06:03.286 [2024-12-10T02:53:57.674Z] Total : 37718.82 147.34 0.00 0.00 20203.38 8418.86 345223.48 00:06:03.909 00:06:03.909 real 0m2.676s 00:06:03.909 user 0m2.383s 00:06:03.909 sys 0m0.181s 00:06:03.909 02:53:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.909 02:53:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:03.909 ************************************ 00:06:03.909 END TEST bdev_write_zeroes 00:06:03.909 ************************************ 00:06:03.909 02:53:58 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:03.909 02:53:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:03.909 02:53:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.909 02:53:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.909 ************************************ 00:06:03.909 START TEST bdev_json_nonenclosed 00:06:03.909 ************************************ 00:06:03.909 02:53:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:03.909 [2024-12-10 02:53:58.110370] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:06:03.909 [2024-12-10 02:53:58.110504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60625 ] 00:06:03.909 [2024-12-10 02:53:58.269488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.170 [2024-12-10 02:53:58.370136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.170 [2024-12-10 02:53:58.370214] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:04.170 [2024-12-10 02:53:58.370230] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:04.170 [2024-12-10 02:53:58.370239] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.170 00:06:04.170 real 0m0.499s 00:06:04.170 user 0m0.303s 00:06:04.170 sys 0m0.093s 00:06:04.170 02:53:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.170 02:53:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:04.170 ************************************ 00:06:04.170 END TEST bdev_json_nonenclosed 00:06:04.170 ************************************ 00:06:04.431 02:53:58 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:04.431 02:53:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:04.431 02:53:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.431 02:53:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.431 ************************************ 00:06:04.431 START TEST bdev_json_nonarray 00:06:04.431 ************************************ 00:06:04.431 02:53:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:04.431 [2024-12-10 02:53:58.649679] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:04.431 [2024-12-10 02:53:58.649787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60645 ] 00:06:04.431 [2024-12-10 02:53:58.807950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.692 [2024-12-10 02:53:58.910593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.692 [2024-12-10 02:53:58.910690] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
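Both JSON negative tests, bdev_json_nonenclosed above and the bdev_json_nonarray test now running, feed bdevperf a deliberately malformed config and expect spdk_app_stop to report non-zero. The actual contents of nonenclosed.json and nonarray.json are not echoed into this log; the snippets below are assumptions that would provoke the same two json_config errors:

    # Hypothetical configs for the two negative tests (contents assumed,
    # not taken from this log).
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # -> json_config: Invalid JSON configuration: not enclosed in {}.

    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev", "config": [] } }
    EOF
    # -> json_config: Invalid JSON configuration: 'subsystems' should be an array.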
00:06:04.692 [2024-12-10 02:53:58.910707] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:04.692 [2024-12-10 02:53:58.910717] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.953 00:06:04.953 real 0m0.498s 00:06:04.953 user 0m0.312s 00:06:04.953 sys 0m0.081s 00:06:04.953 02:53:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.953 02:53:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:04.953 ************************************ 00:06:04.953 END TEST bdev_json_nonarray 00:06:04.953 ************************************ 00:06:04.953 02:53:59 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:04.953 02:53:59 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:04.953 02:53:59 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:04.953 02:53:59 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:04.953 02:53:59 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:04.953 02:53:59 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:04.953 02:53:59 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:04.953 02:53:59 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:04.953 02:53:59 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:04.953 02:53:59 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:04.953 02:53:59 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:04.953 00:06:04.953 real 0m35.720s 00:06:04.953 user 0m55.942s 00:06:04.953 sys 0m4.852s 00:06:04.953 02:53:59 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.953 02:53:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.953 ************************************ 00:06:04.953 END TEST blockdev_nvme 00:06:04.953 ************************************ 00:06:04.953 02:53:59 -- spdk/autotest.sh@209 -- # uname -s 00:06:04.953 02:53:59 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:04.953 02:53:59 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:04.953 02:53:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:04.953 02:53:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.953 02:53:59 -- common/autotest_common.sh@10 -- # set +x 00:06:04.953 ************************************ 00:06:04.953 START TEST blockdev_nvme_gpt 00:06:04.953 ************************************ 00:06:04.953 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:04.953 * Looking for test storage... 
00:06:04.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:04.953 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:04.953 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:04.953 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.953 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.953 02:53:59 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:04.953 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.953 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.953 --rc genhtml_branch_coverage=1 00:06:04.953 --rc genhtml_function_coverage=1 00:06:04.953 --rc genhtml_legend=1 00:06:04.953 --rc geninfo_all_blocks=1 00:06:04.953 --rc geninfo_unexecuted_blocks=1 00:06:04.953 00:06:04.953 ' 00:06:04.953 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.953 --rc 
genhtml_branch_coverage=1 00:06:04.953 --rc genhtml_function_coverage=1 00:06:04.953 --rc genhtml_legend=1 00:06:04.953 --rc geninfo_all_blocks=1 00:06:04.954 --rc geninfo_unexecuted_blocks=1 00:06:04.954 00:06:04.954 ' 00:06:04.954 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:04.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.954 --rc genhtml_branch_coverage=1 00:06:04.954 --rc genhtml_function_coverage=1 00:06:04.954 --rc genhtml_legend=1 00:06:04.954 --rc geninfo_all_blocks=1 00:06:04.954 --rc geninfo_unexecuted_blocks=1 00:06:04.954 00:06:04.954 ' 00:06:04.954 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:04.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.954 --rc genhtml_branch_coverage=1 00:06:04.954 --rc genhtml_function_coverage=1 00:06:04.954 --rc genhtml_legend=1 00:06:04.954 --rc geninfo_all_blocks=1 00:06:04.954 --rc geninfo_unexecuted_blocks=1 00:06:04.954 00:06:04.954 ' 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60729 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60729 
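Here waitforlisten blocks until the freshly started spdk_tgt (pid 60729) answers on /var/tmp/spdk.sock. A compressed sketch of the launch-and-wait pattern, simplified from the real helpers in autotest_common.sh (which additionally verify the pid and bound the retries):

    # Launch spdk_tgt and poll its RPC socket (simplified sketch).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # rpc_get_methods is a cheap query that succeeds once the server is up.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &>/dev/null; do
        sleep 0.1
    done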
00:06:04.954 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60729 ']' 00:06:04.954 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.954 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.954 02:53:59 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:04.954 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.954 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.954 02:53:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:05.216 [2024-12-10 02:53:59.376721] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:05.216 [2024-12-10 02:53:59.376840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60729 ] 00:06:05.216 [2024-12-10 02:53:59.537528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.477 [2024-12-10 02:53:59.637066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.042 02:54:00 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.042 02:54:00 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:06.042 02:54:00 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:06.042 02:54:00 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:06.042 02:54:00 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:06.300 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:06.300 Waiting for block devices as requested 00:06:06.300 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:06.558 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:06.558 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:06.558 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:11.818 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:11.818 02:54:05 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:11.818 02:54:05 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:11.818 BYT; 00:06:11.818 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:11.818 BYT; 00:06:11.818 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:11.818 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:11.818 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:11.818 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:11.818 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:11.819 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:11.819 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:11.819 02:54:05 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:11.819 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:11.819 02:54:05 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:12.753 The operation has completed successfully. 00:06:12.753 02:54:06 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:13.686 The operation has completed successfully. 00:06:13.686 02:54:08 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:14.251 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.509 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:14.509 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:14.509 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:14.767 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:14.767 02:54:08 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:14.767 02:54:08 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.767 02:54:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:14.767 [] 00:06:14.767 02:54:08 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.767 02:54:08 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:14.767 02:54:08 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:14.767 02:54:08 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:14.767 02:54:08 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:14.767 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:14.767 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.767 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.026 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:15.026 02:54:09 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.026 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:15.026 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.026 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.026 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.026 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:15.026 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.026 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:15.026 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:15.285 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.285 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:15.285 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:15.285 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3e5c9823-2e6f-44fa-a087-b83b034fe3d6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3e5c9823-2e6f-44fa-a087-b83b034fe3d6",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "6891648a-7f98-4010-8be0-94360526d5b0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6891648a-7f98-4010-8be0-94360526d5b0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "9c77286c-02e3-4851-98fc-ed6837254054"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c77286c-02e3-4851-98fc-ed6837254054",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "55897935-9fa7-4a1d-bd01-d4a2d9a9527a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "55897935-9fa7-4a1d-bd01-d4a2d9a9527a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "62647924-9ec1-46aa-ba08-c90a9966ad98"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "62647924-9ec1-46aa-ba08-c90a9966ad98",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:15.285 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:15.285 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:15.285 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:15.285 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60729 00:06:15.286 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60729 ']' 00:06:15.286 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60729 00:06:15.286 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:15.286 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.286 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60729 00:06:15.286 killing process with pid 60729 00:06:15.286 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.286 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.286 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60729' 00:06:15.286 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60729 00:06:15.286 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60729 00:06:16.659 02:54:11 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:16.659 02:54:11 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:16.659 02:54:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:16.659 02:54:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.659 02:54:11 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:16.659 ************************************ 00:06:16.659 START TEST bdev_hello_world 00:06:16.659 ************************************ 00:06:16.659 02:54:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:16.917 [2024-12-10 02:54:11.082283] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:16.917 [2024-12-10 02:54:11.082463] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61347 ] 00:06:16.917 [2024-12-10 02:54:11.253130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.208 [2024-12-10 02:54:11.354879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.806 [2024-12-10 02:54:11.906182] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:17.806 [2024-12-10 02:54:11.906235] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:17.806 [2024-12-10 02:54:11.906258] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:17.806 [2024-12-10 02:54:11.908743] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:17.806 [2024-12-10 02:54:11.909147] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:17.806 [2024-12-10 02:54:11.909204] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:17.806 [2024-12-10 02:54:11.909347] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:17.806 00:06:17.806 [2024-12-10 02:54:11.909368] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:18.373 00:06:18.373 real 0m1.644s 00:06:18.373 user 0m1.358s 00:06:18.373 sys 0m0.178s 00:06:18.373 ************************************ 00:06:18.373 END TEST bdev_hello_world 00:06:18.373 ************************************ 00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:18.373 02:54:12 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:18.373 02:54:12 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:18.373 02:54:12 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.373 02:54:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:18.373 ************************************ 00:06:18.373 START TEST bdev_bounds 00:06:18.373 ************************************ 00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61388 00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61388' 00:06:18.373 Process bdevio pid: 61388 00:06:18.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
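The bdev_bounds test that starts here runs bdevio as an RPC-driven server: -w makes it wait after initialization, and tests.py then triggers the suites seen below with perform_tests. A sketch of that pair, using the same paths as this run:

    # bdevio server plus RPC trigger (sketch; paths as in this job).
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # (in the harness, a waitforlisten on the socket sits between the two steps)
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests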
00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61388 00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61388 ']' 00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:18.373 02:54:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:18.631 [2024-12-10 02:54:12.758465] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:18.631 [2024-12-10 02:54:12.758949] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61388 ] 00:06:18.631 [2024-12-10 02:54:12.918993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.889 [2024-12-10 02:54:13.026529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.889 [2024-12-10 02:54:13.026666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.889 [2024-12-10 02:54:13.026857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.489 02:54:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.489 02:54:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:19.489 02:54:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:19.489 I/O targets: 00:06:19.489 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:19.489 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:19.489 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:19.489 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:19.489 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:19.489 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:19.489 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:19.489 00:06:19.489 00:06:19.489 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.489 http://cunit.sourceforge.net/ 00:06:19.489 00:06:19.489 00:06:19.489 Suite: bdevio tests on: Nvme3n1 00:06:19.489 Test: blockdev write read block ...passed 00:06:19.489 Test: blockdev write zeroes read block ...passed 00:06:19.489 Test: blockdev write zeroes read no split ...passed 00:06:19.489 Test: blockdev write zeroes read split ...passed 00:06:19.489 Test: blockdev write zeroes read split partial ...passed 00:06:19.489 Test: blockdev reset ...[2024-12-10 02:54:13.737096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:19.489 [2024-12-10 02:54:13.740932] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:19.489 passed 00:06:19.489 Test: blockdev write read 8 blocks ...passed 00:06:19.489 Test: blockdev write read size > 128k ...passed 00:06:19.489 Test: blockdev write read invalid size ...passed 00:06:19.489 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:19.489 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:19.489 Test: blockdev write read max offset ...passed 00:06:19.489 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:19.489 Test: blockdev writev readv 8 blocks ...passed 00:06:19.489 Test: blockdev writev readv 30 x 1block ...passed 00:06:19.489 Test: blockdev writev readv block ...passed 00:06:19.489 Test: blockdev writev readv size > 128k ...passed 00:06:19.489 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:19.489 Test: blockdev comparev and writev ...[2024-12-10 02:54:13.748480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295c04000 len:0x1000 00:06:19.489 [2024-12-10 02:54:13.748633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:passed 00:06:19.489 Test: blockdev nvme passthru rw ...passed 00:06:19.489 Test: blockdev nvme passthru vendor specific ...0 sqhd:0018 p:1 m:0 dnr:1 00:06:19.489 [2024-12-10 02:54:13.749263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:19.489 [2024-12-10 02:54:13.749291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:19.489 passed 00:06:19.489 Test: blockdev nvme admin passthru ...passed 00:06:19.489 Test: blockdev copy ...passed 00:06:19.489 Suite: bdevio tests on: Nvme2n3 00:06:19.489 Test: blockdev write read block ...passed 00:06:19.489 Test: blockdev write zeroes read block ...passed 00:06:19.489 Test: blockdev write zeroes read no split ...passed 00:06:19.489 Test: blockdev write zeroes read split ...passed 00:06:19.489 Test: blockdev write zeroes read split partial ...passed 00:06:19.489 Test: blockdev reset ...[2024-12-10 02:54:13.798044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:19.489 passed 00:06:19.489 Test: blockdev write read 8 blocks ...[2024-12-10 02:54:13.801305] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:19.489 passed 00:06:19.489 Test: blockdev write read size > 128k ...passed 00:06:19.489 Test: blockdev write read invalid size ...passed 00:06:19.489 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:19.489 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:19.489 Test: blockdev write read max offset ...passed 00:06:19.489 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:19.489 Test: blockdev writev readv 8 blocks ...passed 00:06:19.489 Test: blockdev writev readv 30 x 1block ...passed 00:06:19.489 Test: blockdev writev readv block ...passed 00:06:19.489 Test: blockdev writev readv size > 128k ...passed 00:06:19.489 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:19.489 Test: blockdev comparev and writev ...[2024-12-10 02:54:13.806263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295c02000 len:0x1000 00:06:19.489 [2024-12-10 02:54:13.806313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:19.489 passed 00:06:19.489 Test: blockdev nvme passthru rw ...passed 00:06:19.489 Test: blockdev nvme passthru vendor specific ...passed 00:06:19.489 Test: blockdev nvme admin passthru ...[2024-12-10 02:54:13.806861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:19.489 [2024-12-10 02:54:13.806891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:19.489 passed 00:06:19.489 Test: blockdev copy ...passed 00:06:19.489 Suite: bdevio tests on: Nvme2n2 00:06:19.489 Test: blockdev write read block ...passed 00:06:19.489 Test: blockdev write zeroes read block ...passed 00:06:19.489 Test: blockdev write zeroes read no split ...passed 00:06:19.489 Test: blockdev write zeroes read split ...passed 00:06:19.489 Test: blockdev write zeroes read split partial ...passed 00:06:19.489 Test: blockdev reset ...[2024-12-10 02:54:13.848649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:19.489 [2024-12-10 02:54:13.851849] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:19.489 passed 00:06:19.489 Test: blockdev write read 8 blocks ...passed 00:06:19.489 Test: blockdev write read size > 128k ...passed 00:06:19.489 Test: blockdev write read invalid size ...passed 00:06:19.489 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:19.489 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:19.489 Test: blockdev write read max offset ...passed 00:06:19.489 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:19.489 Test: blockdev writev readv 8 blocks ...passed 00:06:19.489 Test: blockdev writev readv 30 x 1block ...passed 00:06:19.489 Test: blockdev writev readv block ...passed 00:06:19.489 Test: blockdev writev readv size > 128k ...passed 00:06:19.489 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:19.489 Test: blockdev comparev and writev ...[2024-12-10 02:54:13.858237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9838000 len:0x1000 00:06:19.489 [2024-12-10 02:54:13.858402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:19.489 passed 00:06:19.490 Test: blockdev nvme passthru rw ...passed 00:06:19.490 Test: blockdev nvme passthru vendor specific ...[2024-12-10 02:54:13.859107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:19.490 [2024-12-10 02:54:13.859205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:06:19.490 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:06:19.490 passed 00:06:19.490 Test: blockdev copy ...passed 00:06:19.490 Suite: bdevio tests on: Nvme2n1 00:06:19.490 Test: blockdev write read block ...passed 00:06:19.750 Test: blockdev write zeroes read block ...passed 00:06:19.750 Test: blockdev write zeroes read no split ...passed 00:06:19.750 Test: blockdev write zeroes read split ...passed 00:06:19.750 Test: blockdev write zeroes read split partial ...passed 00:06:19.750 Test: blockdev reset ...[2024-12-10 02:54:13.913401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:19.750 [2024-12-10 02:54:13.916447] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:19.750 passed 00:06:19.750 Test: blockdev write read 8 blocks ...passed 00:06:19.750 Test: blockdev write read size > 128k ...passed 00:06:19.750 Test: blockdev write read invalid size ...passed 00:06:19.750 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:19.750 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:19.750 Test: blockdev write read max offset ...passed 00:06:19.750 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:19.750 Test: blockdev writev readv 8 blocks ...passed 00:06:19.750 Test: blockdev writev readv 30 x 1block ...passed 00:06:19.750 Test: blockdev writev readv block ...passed 00:06:19.750 Test: blockdev writev readv size > 128k ...passed 00:06:19.750 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:19.750 Test: blockdev comparev and writev ...[2024-12-10 02:54:13.921964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9834000 len:0x1000 00:06:19.750 [2024-12-10 02:54:13.922016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:19.750 passed 00:06:19.750 Test: blockdev nvme passthru rw ...passed 00:06:19.750 Test: blockdev nvme passthru vendor specific ...passed 00:06:19.750 Test: blockdev nvme admin passthru ...[2024-12-10 02:54:13.922573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:19.750 [2024-12-10 02:54:13.922603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:19.750 passed 00:06:19.750 Test: blockdev copy ...passed 00:06:19.750 Suite: bdevio tests on: Nvme1n1p2 00:06:19.750 Test: blockdev write read block ...passed 00:06:19.750 Test: blockdev write zeroes read block ...passed 00:06:19.750 Test: blockdev write zeroes read no split ...passed 00:06:19.750 Test: blockdev write zeroes read split ...passed 00:06:19.750 Test: blockdev write zeroes read split partial ...passed 00:06:19.750 Test: blockdev reset ...[2024-12-10 02:54:13.964153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:19.750 [2024-12-10 02:54:13.966888] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:19.750 passed 00:06:19.750 Test: blockdev write read 8 blocks ...passed 00:06:19.750 Test: blockdev write read size > 128k ...passed 00:06:19.750 Test: blockdev write read invalid size ...passed 00:06:19.750 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:19.751 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:19.751 Test: blockdev write read max offset ...passed 00:06:19.751 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:19.751 Test: blockdev writev readv 8 blocks ...passed 00:06:19.751 Test: blockdev writev readv 30 x 1block ...passed 00:06:19.751 Test: blockdev writev readv block ...passed 00:06:19.751 Test: blockdev writev readv size > 128k ...passed 00:06:19.751 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:19.751 Test: blockdev comparev and writev ...[2024-12-10 02:54:13.973352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c9830000 len:0x1000 00:06:19.751 [2024-12-10 02:54:13.973519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:19.751 passed 00:06:19.751 Test: blockdev nvme passthru rw ...passed 00:06:19.751 Test: blockdev nvme passthru vendor specific ...passed 00:06:19.751 Test: blockdev nvme admin passthru ...passed 00:06:19.751 Test: blockdev copy ...passed 00:06:19.751 Suite: bdevio tests on: Nvme1n1p1 00:06:19.751 Test: blockdev write read block ...passed 00:06:19.751 Test: blockdev write zeroes read block ...passed 00:06:19.751 Test: blockdev write zeroes read no split ...passed 00:06:19.751 Test: blockdev write zeroes read split ...passed 00:06:19.751 Test: blockdev write zeroes read split partial ...passed 00:06:19.751 Test: blockdev reset ...[2024-12-10 02:54:14.015332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:19.751 [2024-12-10 02:54:14.018192] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:19.751 passed 00:06:19.751 Test: blockdev write read 8 blocks ...passed 00:06:19.751 Test: blockdev write read size > 128k ...passed 00:06:19.751 Test: blockdev write read invalid size ...passed 00:06:19.751 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:19.751 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:19.751 Test: blockdev write read max offset ...passed 00:06:19.751 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:19.751 Test: blockdev writev readv 8 blocks ...passed 00:06:19.751 Test: blockdev writev readv 30 x 1block ...passed 00:06:19.751 Test: blockdev writev readv block ...passed 00:06:19.751 Test: blockdev writev readv size > 128k ...passed 00:06:19.751 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:19.751 Test: blockdev comparev and writev ...[2024-12-10 02:54:14.026390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x29660e000 len:0x1000 00:06:19.751 [2024-12-10 02:54:14.026434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:19.751 passed 00:06:19.751 Test: blockdev nvme passthru rw ...passed 00:06:19.751 Test: blockdev nvme passthru vendor specific ...passed 00:06:19.751 Test: blockdev nvme admin passthru ...passed 00:06:19.751 Test: blockdev copy ...passed 00:06:19.751 Suite: bdevio tests on: Nvme0n1 00:06:19.751 Test: blockdev write read block ...passed 00:06:19.751 Test: blockdev write zeroes read block ...passed 00:06:19.751 Test: blockdev write zeroes read no split ...passed 00:06:19.751 Test: blockdev write zeroes read split ...passed 00:06:19.751 Test: blockdev write zeroes read split partial ...passed 00:06:19.751 Test: blockdev reset ...[2024-12-10 02:54:14.074209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:19.751 passed 00:06:19.751 Test: blockdev write read 8 blocks ...[2024-12-10 02:54:14.076897] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:19.751 passed 00:06:19.751 Test: blockdev write read size > 128k ...passed 00:06:19.751 Test: blockdev write read invalid size ...passed 00:06:19.751 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:19.751 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:19.751 Test: blockdev write read max offset ...passed 00:06:19.751 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:19.751 Test: blockdev writev readv 8 blocks ...passed 00:06:19.751 Test: blockdev writev readv 30 x 1block ...passed 00:06:19.751 Test: blockdev writev readv block ...passed 00:06:19.751 Test: blockdev writev readv size > 128k ...passed 00:06:19.751 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:19.751 Test: blockdev comparev and writev ...passed 00:06:19.751 Test: blockdev nvme passthru rw ...[2024-12-10 02:54:14.081970] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:19.751 separate metadata which is not supported yet. 
00:06:19.751 passed 00:06:19.751 Test: blockdev nvme passthru vendor specific ...[2024-12-10 02:54:14.082419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:19.751 passed 00:06:19.751 Test: blockdev nvme admin passthru ...[2024-12-10 02:54:14.082456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:19.751 passed 00:06:19.751 Test: blockdev copy ...passed 00:06:19.751 00:06:19.751 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.751 suites 7 7 n/a 0 0 00:06:19.751 tests 161 161 161 0 0 00:06:19.751 asserts 1025 1025 1025 0 n/a 00:06:19.751 00:06:19.751 Elapsed time = 1.034 seconds 00:06:19.751 0 00:06:19.751 02:54:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61388 00:06:19.751 02:54:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61388 ']' 00:06:19.751 02:54:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61388 00:06:19.751 02:54:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:19.751 02:54:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.751 02:54:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61388 00:06:19.751 killing process with pid 61388 00:06:19.751 02:54:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.751 02:54:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.751 02:54:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61388' 00:06:19.751 02:54:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61388 00:06:19.751 02:54:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61388 00:06:21.692 02:54:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:21.692 00:06:21.692 real 0m2.855s 00:06:21.692 user 0m7.605s 00:06:21.692 sys 0m0.314s 00:06:21.692 ************************************ 00:06:21.692 END TEST bdev_bounds 00:06:21.692 ************************************ 00:06:21.692 02:54:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.692 02:54:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:21.692 02:54:15 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:21.692 02:54:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:21.692 02:54:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.692 02:54:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:21.693 ************************************ 00:06:21.693 START TEST bdev_nbd 00:06:21.693 ************************************ 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61448 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:21.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61448 /var/tmp/spdk-nbd.sock 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61448 ']' 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:21.693 02:54:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:21.693 [2024-12-10 02:54:15.663125] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
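The trace above shows nbd_function_test bringing up SPDK's bdev_svc app against a JSON bdev config on a private RPC socket, then waiting (waitforlisten) until the socket accepts RPCs before any nbd commands are issued. A minimal standalone sketch of that bring-up, assuming an SPDK checkout at $SPDK_DIR and a bdev config at $CONF (both placeholders, not paths from this run), and assuming waitforlisten can be approximated by polling rpc_get_methods:

    # Start the bdev service listening on a private RPC socket.
    "$SPDK_DIR"/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json "$CONF" &
    svc_pid=$!
    # Poll until the socket accepts RPCs, roughly what waitforlisten does above.
    until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done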
00:06:21.693 [2024-12-10 02:54:15.663247] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.693 [2024-12-10 02:54:15.821295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.693 [2024-12-10 02:54:15.925218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:22.282 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.540 1+0 records in 00:06:22.540 1+0 records out 00:06:22.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361286 s, 11.3 MB/s 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:22.540 02:54:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.797 1+0 records in 00:06:22.797 1+0 records out 00:06:22.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387968 s, 10.6 MB/s 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.797 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:22.798 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:22.798 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.055 1+0 records in 00:06:23.055 1+0 records out 00:06:23.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578168 s, 7.1 MB/s 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:23.055 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:23.312 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:23.312 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:23.312 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:23.312 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:23.312 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:23.312 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.312 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.312 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:23.312 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:23.312 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.312 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.313 02:54:17 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.313 1+0 records in 00:06:23.313 1+0 records out 00:06:23.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060227 s, 6.8 MB/s 00:06:23.313 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.313 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:23.313 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.313 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.313 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:23.313 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:23.313 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:23.313 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.571 1+0 records in 00:06:23.571 1+0 records out 00:06:23.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479961 s, 8.5 MB/s 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:23.571 02:54:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
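For each bdev, the trace above attaches a kernel NBD device over the RPC socket and treats it as usable only once it appears in /proc/partitions and a single 4 KiB direct-I/O read succeeds (the waitfornbd helper). A condensed sketch of that attach-and-verify step, with the bdev name, device node, and scratch file as illustrative placeholders and rpc.py standing in for SPDK's scripts/rpc.py:

    # Attach a bdev to an NBD device; the trace also uses the one-argument
    # form of nbd_start_disk, letting SPDK pick the device node itself.
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    # waitfornbd: retry until the kernel lists the device, then prove it
    # serves reads with one direct-I/O block.
    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct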
00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.910 1+0 records in 00:06:23.910 1+0 records out 00:06:23.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491771 s, 8.3 MB/s 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:23.910 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:24.168 1+0 records in 00:06:24.168 1+0 records out 00:06:24.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369407 s, 11.1 MB/s 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd0", 00:06:24.168 "bdev_name": "Nvme0n1" 00:06:24.168 }, 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd1", 00:06:24.168 "bdev_name": "Nvme1n1p1" 00:06:24.168 }, 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd2", 00:06:24.168 "bdev_name": "Nvme1n1p2" 00:06:24.168 }, 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd3", 00:06:24.168 "bdev_name": "Nvme2n1" 00:06:24.168 }, 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd4", 00:06:24.168 "bdev_name": "Nvme2n2" 00:06:24.168 }, 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd5", 00:06:24.168 "bdev_name": "Nvme2n3" 00:06:24.168 }, 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd6", 00:06:24.168 "bdev_name": "Nvme3n1" 00:06:24.168 } 00:06:24.168 ]' 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:24.168 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd0", 00:06:24.168 "bdev_name": "Nvme0n1" 00:06:24.168 }, 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd1", 00:06:24.168 "bdev_name": "Nvme1n1p1" 00:06:24.168 }, 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd2", 00:06:24.168 "bdev_name": "Nvme1n1p2" 00:06:24.168 }, 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd3", 00:06:24.168 "bdev_name": "Nvme2n1" 00:06:24.168 }, 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd4", 00:06:24.168 "bdev_name": "Nvme2n2" 00:06:24.168 }, 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd5", 00:06:24.168 "bdev_name": "Nvme2n3" 00:06:24.168 }, 00:06:24.168 { 00:06:24.168 "nbd_device": "/dev/nbd6", 00:06:24.168 "bdev_name": "Nvme3n1" 00:06:24.168 } 00:06:24.168 ]' 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.426 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.684 02:54:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.684 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.684 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.684 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.684 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.684 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.684 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.684 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.684 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.684 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:24.941 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:24.941 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:24.941 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:24.941 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.941 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.941 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:24.942 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.942 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.942 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.942 02:54:19 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:25.200 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:25.200 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:25.200 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:25.200 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.200 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.200 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:25.200 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.200 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.200 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.200 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:25.459 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:25.459 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:25.459 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:25.459 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.459 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.459 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:25.459 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.459 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.459 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.459 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:25.717 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:25.717 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:25.717 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:25.717 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.717 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.717 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:25.717 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.717 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.717 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.717 02:54:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
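The teardown traced here mirrors the setup: nbd_get_disks reports the attached devices as a JSON array, each one is detached with nbd_stop_disk, and waitfornbd_exit polls until the kernel drops the node from /proc/partitions. A sketch of that loop, again assuming rpc.py is SPDK's scripts/rpc.py on PATH:

    # Stop every NBD device known to this socket and wait for each to vanish.
    for dev in $(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'); do
        rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
        while grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
    done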
00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.975 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.233 02:54:20 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:26.233 /dev/nbd0 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.233 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.234 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:26.234 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.234 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.234 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.234 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.234 1+0 records in 00:06:26.234 1+0 records out 00:06:26.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303559 s, 13.5 MB/s 00:06:26.234 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.234 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.234 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.234 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.234 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.234 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.234 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:26.234 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:26.492 /dev/nbd1 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.492 02:54:20 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.492 1+0 records in 00:06:26.492 1+0 records out 00:06:26.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465542 s, 8.8 MB/s 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:26.492 02:54:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:26.750 /dev/nbd10 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.750 1+0 records in 00:06:26.750 1+0 records out 00:06:26.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383452 s, 10.7 MB/s 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:26.750 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:27.010 /dev/nbd11 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.010 1+0 records in 00:06:27.010 1+0 records out 00:06:27.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337563 s, 12.1 MB/s 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:27.010 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:27.268 /dev/nbd12 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 
/proc/partitions 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.268 1+0 records in 00:06:27.268 1+0 records out 00:06:27.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564865 s, 7.3 MB/s 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:27.268 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:27.526 /dev/nbd13 00:06:27.526 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:27.526 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:27.526 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:27.526 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.526 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.526 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.526 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:27.527 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.527 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.527 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.527 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.527 1+0 records in 00:06:27.527 1+0 records out 00:06:27.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409866 s, 10.0 MB/s 00:06:27.527 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.527 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.527 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.527 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.527 02:54:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.527 02:54:21 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.527 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:27.527 02:54:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:27.785 /dev/nbd14 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.785 1+0 records in 00:06:27.785 1+0 records out 00:06:27.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509572 s, 8.0 MB/s 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.785 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd0", 00:06:28.043 "bdev_name": "Nvme0n1" 00:06:28.043 }, 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd1", 00:06:28.043 "bdev_name": "Nvme1n1p1" 00:06:28.043 }, 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd10", 00:06:28.043 "bdev_name": "Nvme1n1p2" 00:06:28.043 }, 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd11", 00:06:28.043 "bdev_name": "Nvme2n1" 00:06:28.043 }, 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd12", 00:06:28.043 "bdev_name": "Nvme2n2" 00:06:28.043 }, 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd13", 
00:06:28.043 "bdev_name": "Nvme2n3" 00:06:28.043 }, 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd14", 00:06:28.043 "bdev_name": "Nvme3n1" 00:06:28.043 } 00:06:28.043 ]' 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd0", 00:06:28.043 "bdev_name": "Nvme0n1" 00:06:28.043 }, 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd1", 00:06:28.043 "bdev_name": "Nvme1n1p1" 00:06:28.043 }, 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd10", 00:06:28.043 "bdev_name": "Nvme1n1p2" 00:06:28.043 }, 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd11", 00:06:28.043 "bdev_name": "Nvme2n1" 00:06:28.043 }, 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd12", 00:06:28.043 "bdev_name": "Nvme2n2" 00:06:28.043 }, 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd13", 00:06:28.043 "bdev_name": "Nvme2n3" 00:06:28.043 }, 00:06:28.043 { 00:06:28.043 "nbd_device": "/dev/nbd14", 00:06:28.043 "bdev_name": "Nvme3n1" 00:06:28.043 } 00:06:28.043 ]' 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.043 /dev/nbd1 00:06:28.043 /dev/nbd10 00:06:28.043 /dev/nbd11 00:06:28.043 /dev/nbd12 00:06:28.043 /dev/nbd13 00:06:28.043 /dev/nbd14' 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.043 /dev/nbd1 00:06:28.043 /dev/nbd10 00:06:28.043 /dev/nbd11 00:06:28.043 /dev/nbd12 00:06:28.043 /dev/nbd13 00:06:28.043 /dev/nbd14' 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:28.043 256+0 records in 00:06:28.043 256+0 records out 00:06:28.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422575 s, 248 MB/s 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.043 256+0 records in 00:06:28.043 256+0 records out 00:06:28.043 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0771731 s, 13.6 MB/s 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.043 256+0 records in 00:06:28.043 256+0 records out 00:06:28.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0748632 s, 14.0 MB/s 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.043 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:28.301 256+0 records in 00:06:28.301 256+0 records out 00:06:28.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.076144 s, 13.8 MB/s 00:06:28.301 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.301 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:28.301 256+0 records in 00:06:28.301 256+0 records out 00:06:28.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0826982 s, 12.7 MB/s 00:06:28.301 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.301 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:28.301 256+0 records in 00:06:28.301 256+0 records out 00:06:28.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0744442 s, 14.1 MB/s 00:06:28.301 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.301 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:28.559 256+0 records in 00:06:28.559 256+0 records out 00:06:28.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0830354 s, 12.6 MB/s 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:28.559 256+0 records in 00:06:28.559 256+0 records out 00:06:28.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0772653 s, 13.6 MB/s 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.559 02:54:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.869 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.869 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.869 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.869 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.869 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.869 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.869 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:28.869 02:54:23 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.869 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.869 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.141 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:29.400 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:29.400 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:29.400 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:29.400 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.400 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.400 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:29.400 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.400 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.400 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.400 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:29.657 02:54:23 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:29.657 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:29.657 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:29.657 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.657 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.657 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:29.657 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.657 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.657 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.657 02:54:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:29.914 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:29.914 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:29.914 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:29.914 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.914 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.914 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:29.914 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.914 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.914 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.914 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:30.171 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:30.171 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:30.171 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:30.171 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.171 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.171 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:30.171 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.171 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.171 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.171 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.171 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.429 02:54:24 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:30.429 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:30.429 malloc_lvol_verify 00:06:30.686 02:54:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:30.686 39bee4c0-4d7e-4140-9d7f-201589daa705 00:06:30.686 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:30.942 22785674-b356-4480-b902-a0d57c70e762 00:06:30.942 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:31.199 /dev/nbd0 00:06:31.199 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:31.199 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:31.199 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:31.199 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:31.199 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:31.199 mke2fs 1.47.0 (5-Feb-2023) 00:06:31.199 Discarding device blocks: 0/4096 done 00:06:31.199 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:31.199 00:06:31.199 Allocating group tables: 0/1 done 00:06:31.199 Writing inode tables: 0/1 done 00:06:31.199 Creating journal (1024 blocks): done 00:06:31.199 Writing superblocks and filesystem accounting information: 0/1 done 00:06:31.199 00:06:31.199 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:31.199 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.199 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:31.199 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.199 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:31.199 02:54:25 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.199 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61448 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61448 ']' 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61448 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61448 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.505 killing process with pid 61448 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61448' 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61448 00:06:31.505 02:54:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61448 00:06:32.438 02:54:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:32.438 00:06:32.438 real 0m10.890s 00:06:32.438 user 0m15.546s 00:06:32.438 sys 0m3.538s 00:06:32.438 ************************************ 00:06:32.438 END TEST bdev_nbd 00:06:32.438 ************************************ 00:06:32.438 02:54:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.438 02:54:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:32.438 02:54:26 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:32.438 02:54:26 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:06:32.438 skipping fio tests on NVMe due to multi-ns failures. 00:06:32.438 02:54:26 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:06:32.438 02:54:26 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
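The trace above repeats one idiom per NBD device: poll /proc/partitions until the kernel registers the device, then read a single 4 KiB block with O_DIRECT to prove it serves I/O. A minimal stand-alone sketch of that idiom, mirroring the 20-try limit and 4096-byte probe seen in the trace; the function name, sleep interval, and probe path are illustrative, not the suite's actual helper:

# Poll until the nbd device appears in /proc/partitions, then do one
# direct-I/O read; a successful dd proves the device is serving data.
waitfornbd_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    dd if="/dev/$nbd_name" of=/tmp/nbdprobe bs=4096 count=1 iflag=direct
}
waitfornbd_sketch nbd0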
00:06:32.438 02:54:26 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:32.438 02:54:26 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:32.438 02:54:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:32.438 02:54:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.438 02:54:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.438 ************************************ 00:06:32.438 START TEST bdev_verify 00:06:32.438 ************************************ 00:06:32.438 02:54:26 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:32.438 [2024-12-10 02:54:26.585424] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:32.438 [2024-12-10 02:54:26.585591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61863 ] 00:06:32.438 [2024-12-10 02:54:26.746475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.696 [2024-12-10 02:54:26.846230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.696 [2024-12-10 02:54:26.846232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.261 Running I/O for 5 seconds... 
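The verify job just launched can be reproduced by hand from the SPDK repo root; a hedged sketch with the flags decoded where their meaning is certain (the JSON path is the test fixture from the trace):

# Same invocation as the trace, flags annotated:
#   -q 128     queue depth          -o 4096   I/O size in bytes (4 KiB)
#   -w verify  write-read-compare   -t 5      run time in seconds
#   -m 0x3     core mask; bits 0 and 1 match the two reactors logged above
#   -C         kept exactly as in the trace; see bdevperf --help for details
./build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3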
00:06:35.568 20800.00 IOPS, 81.25 MiB/s [2024-12-10T02:54:30.890Z] 21792.00 IOPS, 85.12 MiB/s [2024-12-10T02:54:31.822Z] 21141.33 IOPS, 82.58 MiB/s [2024-12-10T02:54:32.754Z] 20864.00 IOPS, 81.50 MiB/s [2024-12-10T02:54:32.754Z] 20902.40 IOPS, 81.65 MiB/s 00:06:38.366 Latency(us) 00:06:38.366 [2024-12-10T02:54:32.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:38.366 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0x0 length 0xbd0bd 00:06:38.366 Nvme0n1 : 5.13 1447.62 5.65 0.00 0.00 88213.14 13913.80 179871.11 00:06:38.366 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:38.366 Nvme0n1 : 5.06 1493.42 5.83 0.00 0.00 85466.47 16535.24 84289.38 00:06:38.366 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0x0 length 0x4ff80 00:06:38.366 Nvme1n1p1 : 5.13 1447.21 5.65 0.00 0.00 88079.14 13913.80 170191.95 00:06:38.366 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0x4ff80 length 0x4ff80 00:06:38.366 Nvme1n1p1 : 5.06 1492.99 5.83 0.00 0.00 85272.31 16434.41 75820.11 00:06:38.366 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0x0 length 0x4ff7f 00:06:38.366 Nvme1n1p2 : 5.13 1446.80 5.65 0.00 0.00 87932.47 12199.78 159706.19 00:06:38.366 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:06:38.366 Nvme1n1p2 : 5.06 1492.56 5.83 0.00 0.00 85120.09 15526.99 69770.63 00:06:38.366 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0x0 length 0x80000 00:06:38.366 Nvme2n1 : 5.13 1446.43 5.65 0.00 0.00 87799.30 12653.49 156479.80 00:06:38.366 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0x80000 length 0x80000 00:06:38.366 Nvme2n1 : 5.06 1492.16 5.83 0.00 0.00 84964.49 14619.57 70980.53 00:06:38.366 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0x0 length 0x80000 00:06:38.366 Nvme2n2 : 5.14 1445.61 5.65 0.00 0.00 87652.26 14518.74 163739.18 00:06:38.366 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0x80000 length 0x80000 00:06:38.366 Nvme2n2 : 5.06 1491.77 5.83 0.00 0.00 84797.14 13712.15 74610.22 00:06:38.366 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0x0 length 0x80000 00:06:38.366 Nvme2n3 : 5.14 1445.24 5.65 0.00 0.00 87477.49 14821.22 169385.35 00:06:38.366 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0x80000 length 0x80000 00:06:38.366 Nvme2n3 : 5.08 1511.34 5.90 0.00 0.00 83609.10 6351.95 76223.41 00:06:38.366 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0x0 length 0x20000 00:06:38.366 Nvme3n1 : 5.14 1444.87 5.64 0.00 0.00 87312.62 10284.11 174224.94 00:06:38.366 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:38.366 Verification LBA range: start 0x20000 length 0x20000 00:06:38.366 
Nvme3n1 : 5.08 1510.93 5.90 0.00 0.00 83462.16 4814.38 78239.90 00:06:38.366 [2024-12-10T02:54:32.754Z] =================================================================================================================== 00:06:38.366 [2024-12-10T02:54:32.754Z] Total : 20608.95 80.50 0.00 0.00 86205.77 4814.38 179871.11 00:06:39.737 00:06:39.737 real 0m7.353s 00:06:39.737 user 0m13.819s 00:06:39.737 sys 0m0.204s 00:06:39.737 02:54:33 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.737 02:54:33 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:39.737 ************************************ 00:06:39.737 END TEST bdev_verify 00:06:39.737 ************************************ 00:06:39.737 02:54:33 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:39.737 02:54:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:39.737 02:54:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.737 02:54:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.737 ************************************ 00:06:39.737 START TEST bdev_verify_big_io 00:06:39.737 ************************************ 00:06:39.737 02:54:33 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:39.737 [2024-12-10 02:54:33.980491] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:39.737 [2024-12-10 02:54:33.980605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61956 ] 00:06:39.995 [2024-12-10 02:54:34.139114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.995 [2024-12-10 02:54:34.238234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.995 [2024-12-10 02:54:34.238252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.560 Running I/O for 5 seconds... 
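The MiB/s column in these tables follows directly from IOPS and the I/O size: MiB/s = IOPS * io_size / 2^20. Two quick cross-checks, one against the verify total above (4096-byte I/Os) and one against the big-I/O total below (65536-byte I/Os):

awk 'BEGIN { printf "%.2f MiB/s\n", 20608.95 * 4096 / 1048576 }'   # 80.50, verify total
awk 'BEGIN { printf "%.2f MiB/s\n", 1549.45 * 65536 / 1048576 }'   # 96.84, big-I/O total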
00:06:45.613 560.00 IOPS, 35.00 MiB/s [2024-12-10T02:54:40.932Z] 1502.00 IOPS, 93.88 MiB/s [2024-12-10T02:54:41.497Z] 2011.67 IOPS, 125.73 MiB/s 00:06:47.109 Latency(us) 00:06:47.109 [2024-12-10T02:54:41.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:47.109 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0x0 length 0xbd0b 00:06:47.109 Nvme0n1 : 5.96 103.30 6.46 0.00 0.00 1177997.16 14417.92 1309913.40 00:06:47.109 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:47.109 Nvme0n1 : 5.85 97.09 6.07 0.00 0.00 1238385.71 10687.41 1729343.80 00:06:47.109 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0x0 length 0x4ff8 00:06:47.109 Nvme1n1p1 : 5.85 99.36 6.21 0.00 0.00 1188927.68 62511.26 1400252.26 00:06:47.109 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0x4ff8 length 0x4ff8 00:06:47.109 Nvme1n1p1 : 5.85 99.95 6.25 0.00 0.00 1178107.02 29642.44 1509949.44 00:06:47.109 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0x0 length 0x4ff7 00:06:47.109 Nvme1n1p2 : 6.11 70.65 4.42 0.00 0.00 1621619.24 146800.64 2297188.04 00:06:47.109 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0x4ff7 length 0x4ff7 00:06:47.109 Nvme1n1p2 : 5.96 110.56 6.91 0.00 0.00 1037365.04 51622.20 955010.76 00:06:47.109 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0x0 length 0x8000 00:06:47.109 Nvme2n1 : 6.06 107.47 6.72 0.00 0.00 1042405.72 113730.17 1193763.45 00:06:47.109 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0x8000 length 0x8000 00:06:47.109 Nvme2n1 : 6.05 108.04 6.75 0.00 0.00 1030165.18 90338.86 1832588.21 00:06:47.109 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0x0 length 0x8000 00:06:47.109 Nvme2n2 : 6.12 114.93 7.18 0.00 0.00 949341.63 56058.49 1213121.77 00:06:47.109 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0x8000 length 0x8000 00:06:47.109 Nvme2n2 : 6.16 111.88 6.99 0.00 0.00 957778.53 68964.04 1858399.31 00:06:47.109 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0x0 length 0x8000 00:06:47.109 Nvme2n3 : 6.18 120.21 7.51 0.00 0.00 877904.62 31658.93 1142141.24 00:06:47.109 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0x8000 length 0x8000 00:06:47.109 Nvme2n3 : 6.19 121.58 7.60 0.00 0.00 857427.21 24298.73 1871304.86 00:06:47.109 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0x0 length 0x2000 00:06:47.109 Nvme3n1 : 6.19 133.97 8.37 0.00 0.00 766514.93 1714.02 1213121.77 00:06:47.109 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:47.109 Verification LBA range: start 0x2000 length 0x2000 00:06:47.109 Nvme3n1 : 6.25 150.44 9.40 0.00 0.00 672948.89 582.89 1690627.15 00:06:47.109 
[2024-12-10T02:54:41.497Z] =================================================================================================================== 00:06:47.109 [2024-12-10T02:54:41.497Z] Total : 1549.45 96.84 0.00 0.00 1003816.63 582.89 2297188.04 00:06:48.482 00:06:48.482 real 0m8.681s 00:06:48.482 user 0m16.435s 00:06:48.482 sys 0m0.230s 00:06:48.482 02:54:42 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.482 ************************************ 00:06:48.482 END TEST bdev_verify_big_io 00:06:48.482 ************************************ 00:06:48.482 02:54:42 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:48.482 02:54:42 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:48.482 02:54:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:48.482 02:54:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.482 02:54:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.482 ************************************ 00:06:48.482 START TEST bdev_write_zeroes 00:06:48.482 ************************************ 00:06:48.482 02:54:42 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:48.482 [2024-12-10 02:54:42.705340] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:48.482 [2024-12-10 02:54:42.705476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62065 ] 00:06:48.482 [2024-12-10 02:54:42.862192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.741 [2024-12-10 02:54:42.959154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.306 Running I/O for 1 seconds... 
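Note the core accounting: the two verify jobs above passed -m 0x3 and logged reactors on cores 0 and 1, while this write_zeroes run omits -m, so the EAL line above shows -c 0x1 and only one reactor starts. A small illustrative helper (not part of the suite) that expands a mask into the cores it selects:

# Each set bit in the core mask yields one reactor thread.
mask_to_cores() {
    local mask=$(( $1 )) i out=""
    for ((i = 0; i < 32; i++)); do
        (( (mask >> i) & 1 )) && out+=" $i"
    done
    printf 'mask 0x%x -> cores:%s\n' "$mask" "$out"
}
mask_to_cores 0x3   # mask 0x3 -> cores: 0 1
mask_to_cores 0x1   # mask 0x1 -> cores: 0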
00:06:50.265 52511.00 IOPS, 205.12 MiB/s 00:06:50.265 Latency(us) 00:06:50.265 [2024-12-10T02:54:44.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.265 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:50.265 Nvme0n1 : 1.02 7480.86 29.22 0.00 0.00 17074.28 7763.50 271016.57 00:06:50.265 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:50.265 Nvme1n1p1 : 1.02 7629.07 29.80 0.00 0.00 16714.05 8368.44 256497.82 00:06:50.265 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:50.265 Nvme1n1p2 : 1.02 7494.87 29.28 0.00 0.00 16975.47 10687.41 264563.79 00:06:50.265 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:50.265 Nvme2n1 : 1.03 7486.35 29.24 0.00 0.00 16965.93 10889.06 264563.79 00:06:50.265 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:50.265 Nvme2n2 : 1.03 7477.97 29.21 0.00 0.00 16961.34 11040.30 264563.79 00:06:50.265 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:50.265 Nvme2n3 : 1.03 7469.60 29.18 0.00 0.00 16955.81 10989.88 262950.60 00:06:50.265 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:50.265 Nvme3n1 : 1.03 7461.24 29.15 0.00 0.00 16955.95 10132.87 262950.60 00:06:50.265 [2024-12-10T02:54:44.653Z] =================================================================================================================== 00:06:50.265 [2024-12-10T02:54:44.653Z] Total : 52499.95 205.08 0.00 0.00 16942.64 7763.50 271016.57 00:06:51.200 00:06:51.200 real 0m2.726s 00:06:51.200 user 0m2.429s 00:06:51.200 sys 0m0.182s 00:06:51.200 02:54:45 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.200 ************************************ 00:06:51.200 END TEST bdev_write_zeroes 00:06:51.200 ************************************ 00:06:51.200 02:54:45 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:51.200 02:54:45 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.200 02:54:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:51.200 02:54:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.200 02:54:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.200 ************************************ 00:06:51.200 START TEST bdev_json_nonenclosed 00:06:51.200 ************************************ 00:06:51.200 02:54:45 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.200 [2024-12-10 02:54:45.472237] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:06:51.200 [2024-12-10 02:54:45.472356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62118 ] 00:06:51.457 [2024-12-10 02:54:45.633458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.457 [2024-12-10 02:54:45.733746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.457 [2024-12-10 02:54:45.733828] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:51.458 [2024-12-10 02:54:45.733845] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:51.458 [2024-12-10 02:54:45.733854] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.715 00:06:51.715 real 0m0.499s 00:06:51.715 user 0m0.312s 00:06:51.715 sys 0m0.083s 00:06:51.715 02:54:45 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.715 ************************************ 00:06:51.715 END TEST bdev_json_nonenclosed 00:06:51.715 ************************************ 00:06:51.715 02:54:45 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:51.715 02:54:45 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.715 02:54:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:51.715 02:54:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.715 02:54:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.715 ************************************ 00:06:51.715 START TEST bdev_json_nonarray 00:06:51.715 ************************************ 00:06:51.715 02:54:45 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.715 [2024-12-10 02:54:46.002607] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:51.715 [2024-12-10 02:54:46.002698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62148 ] 00:06:51.972 [2024-12-10 02:54:46.158354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.972 [2024-12-10 02:54:46.259265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.972 [2024-12-10 02:54:46.259356] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
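Both negative JSON tests have now hit their intended parse errors: nonenclosed.json is rejected because the configuration is not enclosed in {}, and nonarray.json because 'subsystems' is not an array. For orientation, a sketch of the three shapes involved; the valid skeleton follows SPDK's standard config layout, and the /tmp file names are illustrative:

# Valid: a JSON object whose "subsystems" key holds an array.
cat > /tmp/valid.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
EOF
# nonenclosed-style breakage: enclosing {} dropped.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": [ { "subsystem": "bdev", "config": [] } ]
EOF
# nonarray-style breakage: "subsystems" is an object, not an array.
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF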
00:06:51.972 [2024-12-10 02:54:46.259384] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:51.972 [2024-12-10 02:54:46.259394] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.230 00:06:52.230 real 0m0.493s 00:06:52.230 user 0m0.294s 00:06:52.230 sys 0m0.094s 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:52.230 ************************************ 00:06:52.230 END TEST bdev_json_nonarray 00:06:52.230 ************************************ 00:06:52.230 02:54:46 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:06:52.230 02:54:46 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:06:52.230 02:54:46 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:06:52.230 02:54:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.230 02:54:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.230 02:54:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.230 ************************************ 00:06:52.230 START TEST bdev_gpt_uuid 00:06:52.230 ************************************ 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62169 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62169 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62169 ']' 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:52.230 02:54:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:52.230 [2024-12-10 02:54:46.560712] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
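The gpt_uuid test starting here boots spdk_tgt, loads bdev.json, and asserts that each GPT partition bdev reports the expected partition GUIDs. The essential query, as performed below; the UUID is the test's own fixture and the command assumes the target is listening on the default RPC socket:

# Fetch the partition bdev by its UUID alias and print its partition GUID;
# the test string-compares the output against the expected value.
./scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
    | jq -r '.[0].driver_specific.gpt.unique_partition_guid'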
00:06:52.230 [2024-12-10 02:54:46.560833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62169 ] 00:06:52.489 [2024-12-10 02:54:46.712150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.489 [2024-12-10 02:54:46.812488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.078 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.078 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:06:53.078 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:53.078 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.078 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:53.643 Some configs were skipped because the RPC state that can call them passed over. 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:06:53.644 { 00:06:53.644 "name": "Nvme1n1p1", 00:06:53.644 "aliases": [ 00:06:53.644 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:06:53.644 ], 00:06:53.644 "product_name": "GPT Disk", 00:06:53.644 "block_size": 4096, 00:06:53.644 "num_blocks": 655104, 00:06:53.644 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:53.644 "assigned_rate_limits": { 00:06:53.644 "rw_ios_per_sec": 0, 00:06:53.644 "rw_mbytes_per_sec": 0, 00:06:53.644 "r_mbytes_per_sec": 0, 00:06:53.644 "w_mbytes_per_sec": 0 00:06:53.644 }, 00:06:53.644 "claimed": false, 00:06:53.644 "zoned": false, 00:06:53.644 "supported_io_types": { 00:06:53.644 "read": true, 00:06:53.644 "write": true, 00:06:53.644 "unmap": true, 00:06:53.644 "flush": true, 00:06:53.644 "reset": true, 00:06:53.644 "nvme_admin": false, 00:06:53.644 "nvme_io": false, 00:06:53.644 "nvme_io_md": false, 00:06:53.644 "write_zeroes": true, 00:06:53.644 "zcopy": false, 00:06:53.644 "get_zone_info": false, 00:06:53.644 "zone_management": false, 00:06:53.644 "zone_append": false, 00:06:53.644 "compare": true, 00:06:53.644 "compare_and_write": false, 00:06:53.644 "abort": true, 00:06:53.644 "seek_hole": false, 00:06:53.644 "seek_data": false, 00:06:53.644 "copy": true, 00:06:53.644 "nvme_iov_md": false 00:06:53.644 }, 00:06:53.644 "driver_specific": { 
00:06:53.644 "gpt": { 00:06:53.644 "base_bdev": "Nvme1n1", 00:06:53.644 "offset_blocks": 256, 00:06:53.644 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:06:53.644 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:53.644 "partition_name": "SPDK_TEST_first" 00:06:53.644 } 00:06:53.644 } 00:06:53.644 } 00:06:53.644 ]' 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:06:53.644 { 00:06:53.644 "name": "Nvme1n1p2", 00:06:53.644 "aliases": [ 00:06:53.644 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:06:53.644 ], 00:06:53.644 "product_name": "GPT Disk", 00:06:53.644 "block_size": 4096, 00:06:53.644 "num_blocks": 655103, 00:06:53.644 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:53.644 "assigned_rate_limits": { 00:06:53.644 "rw_ios_per_sec": 0, 00:06:53.644 "rw_mbytes_per_sec": 0, 00:06:53.644 "r_mbytes_per_sec": 0, 00:06:53.644 "w_mbytes_per_sec": 0 00:06:53.644 }, 00:06:53.644 "claimed": false, 00:06:53.644 "zoned": false, 00:06:53.644 "supported_io_types": { 00:06:53.644 "read": true, 00:06:53.644 "write": true, 00:06:53.644 "unmap": true, 00:06:53.644 "flush": true, 00:06:53.644 "reset": true, 00:06:53.644 "nvme_admin": false, 00:06:53.644 "nvme_io": false, 00:06:53.644 "nvme_io_md": false, 00:06:53.644 "write_zeroes": true, 00:06:53.644 "zcopy": false, 00:06:53.644 "get_zone_info": false, 00:06:53.644 "zone_management": false, 00:06:53.644 "zone_append": false, 00:06:53.644 "compare": true, 00:06:53.644 "compare_and_write": false, 00:06:53.644 "abort": true, 00:06:53.644 "seek_hole": false, 00:06:53.644 "seek_data": false, 00:06:53.644 "copy": true, 00:06:53.644 "nvme_iov_md": false 00:06:53.644 }, 00:06:53.644 "driver_specific": { 00:06:53.644 "gpt": { 00:06:53.644 "base_bdev": "Nvme1n1", 00:06:53.644 "offset_blocks": 655360, 00:06:53.644 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:06:53.644 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:53.644 "partition_name": "SPDK_TEST_second" 00:06:53.644 } 00:06:53.644 } 00:06:53.644 } 00:06:53.644 ]' 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62169 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62169 ']' 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62169 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62169 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.644 killing process with pid 62169 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62169' 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62169 00:06:53.644 02:54:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62169 00:06:55.540 00:06:55.540 real 0m3.007s 00:06:55.540 user 0m3.181s 00:06:55.540 sys 0m0.343s 00:06:55.541 02:54:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.541 02:54:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:55.541 ************************************ 00:06:55.541 END TEST bdev_gpt_uuid 00:06:55.541 ************************************ 00:06:55.541 02:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:06:55.541 02:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:55.541 02:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:06:55.541 02:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:55.541 02:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:55.541 02:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:06:55.541 02:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:06:55.541 02:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:06:55.541 02:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:55.541 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:55.798 Waiting for block devices as requested 00:06:55.798 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:55.798 0000:00:10.0 (1b36 0010): 
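The bdev_gpt_uuid checks above reduce to one assertion: for a GPT bdev, the first alias reported by bdev_get_bdevs must equal the partition's unique_partition_guid. A standalone sketch of that assertion in bash, assuming a running SPDK target reachable through scripts/rpc.py (the bdev name Nvme1n1p2 and the jq filters are taken from the trace above):

  verify_gpt_uuid() {
      # bdev_get_bdevs -b accepts a name or alias and returns a one-element
      # JSON array; compare the alias against the GPT unique partition GUID.
      local bdev=$1 json bdev_alias guid
      json=$(./scripts/rpc.py bdev_get_bdevs -b "$bdev") || return 1
      bdev_alias=$(jq -r '.[0].aliases[0]' <<<"$json")
      guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$json")
      [[ $bdev_alias == "$guid" ]] || { echo "GPT UUID mismatch for $bdev" >&2; return 1; }
  }
  verify_gpt_uuid Nvme1n1p2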
uio_pci_generic -> nvme 00:06:55.798 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:56.055 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:01.313 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:01.313 02:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:01.313 02:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:01.313 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:01.313 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:01.313 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:01.313 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:01.313 02:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:01.313 00:07:01.313 real 0m56.349s 00:07:01.313 user 1m13.656s 00:07:01.313 sys 0m7.579s 00:07:01.313 02:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.313 02:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:01.313 ************************************ 00:07:01.313 END TEST blockdev_nvme_gpt 00:07:01.313 ************************************ 00:07:01.313 02:54:55 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:01.313 02:54:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.313 02:54:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.313 02:54:55 -- common/autotest_common.sh@10 -- # set +x 00:07:01.313 ************************************ 00:07:01.313 START TEST nvme 00:07:01.313 ************************************ 00:07:01.313 02:54:55 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:01.313 * Looking for test storage... 00:07:01.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:01.313 02:54:55 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.313 02:54:55 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.313 02:54:55 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.313 02:54:55 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.313 02:54:55 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.313 02:54:55 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.313 02:54:55 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.313 02:54:55 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.313 02:54:55 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.313 02:54:55 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.313 02:54:55 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.313 02:54:55 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.313 02:54:55 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.313 02:54:55 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.313 02:54:55 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.313 02:54:55 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:01.313 02:54:55 nvme -- scripts/common.sh@345 -- # : 1 00:07:01.313 02:54:55 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.313 02:54:55 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.313 02:54:55 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:01.313 02:54:55 nvme -- scripts/common.sh@353 -- # local d=1 00:07:01.313 02:54:55 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.313 02:54:55 nvme -- scripts/common.sh@355 -- # echo 1 00:07:01.313 02:54:55 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.313 02:54:55 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:01.313 02:54:55 nvme -- scripts/common.sh@353 -- # local d=2 00:07:01.313 02:54:55 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.313 02:54:55 nvme -- scripts/common.sh@355 -- # echo 2 00:07:01.313 02:54:55 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.313 02:54:55 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.313 02:54:55 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.313 02:54:55 nvme -- scripts/common.sh@368 -- # return 0 00:07:01.313 02:54:55 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.313 02:54:55 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.313 --rc genhtml_branch_coverage=1 00:07:01.313 --rc genhtml_function_coverage=1 00:07:01.313 --rc genhtml_legend=1 00:07:01.313 --rc geninfo_all_blocks=1 00:07:01.313 --rc geninfo_unexecuted_blocks=1 00:07:01.313 00:07:01.313 ' 00:07:01.313 02:54:55 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.313 --rc genhtml_branch_coverage=1 00:07:01.313 --rc genhtml_function_coverage=1 00:07:01.313 --rc genhtml_legend=1 00:07:01.313 --rc geninfo_all_blocks=1 00:07:01.313 --rc geninfo_unexecuted_blocks=1 00:07:01.313 00:07:01.313 ' 00:07:01.313 02:54:55 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.314 --rc genhtml_branch_coverage=1 00:07:01.314 --rc genhtml_function_coverage=1 00:07:01.314 --rc genhtml_legend=1 00:07:01.314 --rc geninfo_all_blocks=1 00:07:01.314 --rc geninfo_unexecuted_blocks=1 00:07:01.314 00:07:01.314 ' 00:07:01.314 02:54:55 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.314 --rc genhtml_branch_coverage=1 00:07:01.314 --rc genhtml_function_coverage=1 00:07:01.314 --rc genhtml_legend=1 00:07:01.314 --rc geninfo_all_blocks=1 00:07:01.314 --rc geninfo_unexecuted_blocks=1 00:07:01.314 00:07:01.314 ' 00:07:01.314 02:54:55 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:01.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:02.443 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.443 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.443 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.443 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.443 02:54:56 nvme -- nvme/nvme.sh@79 -- # uname 00:07:02.443 02:54:56 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:02.443 02:54:56 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:02.443 02:54:56 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:02.443 02:54:56 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:02.443 02:54:56 nvme -- 
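The scripts/common.sh trace above is a pure-bash dotted-version comparison: both version strings are split on dots and compared field by field, which is how the harness decides that lcov 1.15 predates 2 and selects the matching coverage flags. A compact sketch of the same idea (not the harness's exact code; missing fields are padded with 0):

  ver_lt() {
      # Return 0 (true) when $1 is strictly older than $2.
      local -a a b
      IFS=. read -ra a <<<"$1"
      IFS=. read -ra b <<<"$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }
  ver_lt 1.15 2 && echo "lcov 1.15 predates 2"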
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:02.443 02:54:56 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:02.443 02:54:56 nvme -- common/autotest_common.sh@1075 -- # stubpid=62803 00:07:02.443 Waiting for stub to ready for secondary processes... 00:07:02.443 02:54:56 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:02.443 02:54:56 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:02.443 02:54:56 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:02.443 02:54:56 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62803 ]] 00:07:02.443 02:54:56 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:02.443 [2024-12-10 02:54:56.692585] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:07:02.443 [2024-12-10 02:54:56.692705] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:03.375 [2024-12-10 02:54:57.455873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.375 [2024-12-10 02:54:57.550686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.375 [2024-12-10 02:54:57.551047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.375 [2024-12-10 02:54:57.551061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.375 [2024-12-10 02:54:57.564614] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:03.375 [2024-12-10 02:54:57.564717] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:03.375 [2024-12-10 02:54:57.574255] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:03.375 [2024-12-10 02:54:57.574419] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:03.375 [2024-12-10 02:54:57.577194] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:03.375 [2024-12-10 02:54:57.577486] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:03.375 [2024-12-10 02:54:57.577569] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:03.375 [2024-12-10 02:54:57.580411] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:03.375 [2024-12-10 02:54:57.580624] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:03.375 [2024-12-10 02:54:57.580668] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:03.375 [2024-12-10 02:54:57.582229] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:03.375 [2024-12-10 02:54:57.582413] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:03.375 [2024-12-10 02:54:57.582462] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:03.375 [2024-12-10 02:54:57.582491] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:03.375 [2024-12-10 02:54:57.582517] nvme_cuse.c: 
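The "Waiting for stub" message above comes from a poll loop: the stub is launched in the background as the primary SPDK process (-s 4096 -i 0 -m 0xE), and the harness waits for the /var/run/spdk_stub0 marker before any secondary process may attach. A minimal sketch of that pattern, with the early-exit check an assumption rather than the harness's exact code:

  /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
  stubpid=$!
  echo "Waiting for stub to be ready for secondary processes..."
  while [[ ! -e /var/run/spdk_stub0 ]]; do
      # Stop waiting if the stub died before creating its marker file.
      kill -0 "$stubpid" 2>/dev/null || { echo "stub exited early" >&2; exit 1; }
      sleep 1
  done
  echo done.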
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:03.375 02:54:57 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:03.375 02:54:57 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:03.375 done. 00:07:03.375 02:54:57 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:03.375 02:54:57 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:03.375 02:54:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.375 02:54:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.375 ************************************ 00:07:03.375 START TEST nvme_reset 00:07:03.375 ************************************ 00:07:03.375 02:54:57 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:03.632 Initializing NVMe Controllers 00:07:03.632 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:03.632 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:03.632 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:03.632 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:03.632 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:03.632 00:07:03.632 real 0m0.218s 00:07:03.633 user 0m0.069s 00:07:03.633 sys 0m0.104s 00:07:03.633 02:54:57 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.633 02:54:57 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:03.633 ************************************ 00:07:03.633 END TEST nvme_reset 00:07:03.633 ************************************ 00:07:03.633 02:54:57 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:03.633 02:54:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.633 02:54:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.633 02:54:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.633 ************************************ 00:07:03.633 START TEST nvme_identify 00:07:03.633 ************************************ 00:07:03.633 02:54:57 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:03.633 02:54:57 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:03.633 02:54:57 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:03.633 02:54:57 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:03.633 02:54:57 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:03.633 02:54:57 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:03.633 02:54:57 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:03.633 02:54:57 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:03.633 02:54:57 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:03.633 02:54:57 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:03.633 02:54:57 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:03.633 02:54:57 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:03.633 02:54:57 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:03.893 [2024-12-10 
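get_nvme_bdfs above collects PCI addresses by running scripts/gen_nvme.sh and pulling .config[].params.traddr out of the generated JSON with jq. When SPDK's generator is not at hand, roughly the same list can be recovered from lspci; the class-code match [0108] (NVMe) and the variable names are assumptions of this sketch, not part of the harness:

  # -D prints the full domain:bus:device.function address; -nn appends
  # numeric IDs, so NVMe controllers carry the class code [0108].
  mapfile -t bdfs < <(lspci -Dnn | awk '/\[0108\]/ {print $1}')
  (( ${#bdfs[@]} > 0 )) || { echo "No NVMe devices found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"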
02:54:58.165731] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62824 terminated unexpected 00:07:03.893 ===================================================== 00:07:03.893 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:03.893 ===================================================== 00:07:03.893 Controller Capabilities/Features 00:07:03.893 ================================ 00:07:03.893 Vendor ID: 1b36 00:07:03.893 Subsystem Vendor ID: 1af4 00:07:03.893 Serial Number: 12340 00:07:03.893 Model Number: QEMU NVMe Ctrl 00:07:03.893 Firmware Version: 8.0.0 00:07:03.893 Recommended Arb Burst: 6 00:07:03.893 IEEE OUI Identifier: 00 54 52 00:07:03.893 Multi-path I/O 00:07:03.893 May have multiple subsystem ports: No 00:07:03.893 May have multiple controllers: No 00:07:03.893 Associated with SR-IOV VF: No 00:07:03.893 Max Data Transfer Size: 524288 00:07:03.893 Max Number of Namespaces: 256 00:07:03.893 Max Number of I/O Queues: 64 00:07:03.893 NVMe Specification Version (VS): 1.4 00:07:03.893 NVMe Specification Version (Identify): 1.4 00:07:03.893 Maximum Queue Entries: 2048 00:07:03.893 Contiguous Queues Required: Yes 00:07:03.893 Arbitration Mechanisms Supported 00:07:03.893 Weighted Round Robin: Not Supported 00:07:03.893 Vendor Specific: Not Supported 00:07:03.893 Reset Timeout: 7500 ms 00:07:03.893 Doorbell Stride: 4 bytes 00:07:03.893 NVM Subsystem Reset: Not Supported 00:07:03.893 Command Sets Supported 00:07:03.893 NVM Command Set: Supported 00:07:03.893 Boot Partition: Not Supported 00:07:03.893 Memory Page Size Minimum: 4096 bytes 00:07:03.893 Memory Page Size Maximum: 65536 bytes 00:07:03.893 Persistent Memory Region: Not Supported 00:07:03.893 Optional Asynchronous Events Supported 00:07:03.893 Namespace Attribute Notices: Supported 00:07:03.893 Firmware Activation Notices: Not Supported 00:07:03.893 ANA Change Notices: Not Supported 00:07:03.893 PLE Aggregate Log Change Notices: Not Supported 00:07:03.893 LBA Status Info Alert Notices: Not Supported 00:07:03.893 EGE Aggregate Log Change Notices: Not Supported 00:07:03.894 Normal NVM Subsystem Shutdown event: Not Supported 00:07:03.894 Zone Descriptor Change Notices: Not Supported 00:07:03.894 Discovery Log Change Notices: Not Supported 00:07:03.894 Controller Attributes 00:07:03.894 128-bit Host Identifier: Not Supported 00:07:03.894 Non-Operational Permissive Mode: Not Supported 00:07:03.894 NVM Sets: Not Supported 00:07:03.894 Read Recovery Levels: Not Supported 00:07:03.894 Endurance Groups: Not Supported 00:07:03.894 Predictable Latency Mode: Not Supported 00:07:03.894 Traffic Based Keep ALive: Not Supported 00:07:03.894 Namespace Granularity: Not Supported 00:07:03.894 SQ Associations: Not Supported 00:07:03.894 UUID List: Not Supported 00:07:03.894 Multi-Domain Subsystem: Not Supported 00:07:03.894 Fixed Capacity Management: Not Supported 00:07:03.894 Variable Capacity Management: Not Supported 00:07:03.894 Delete Endurance Group: Not Supported 00:07:03.894 Delete NVM Set: Not Supported 00:07:03.894 Extended LBA Formats Supported: Supported 00:07:03.894 Flexible Data Placement Supported: Not Supported 00:07:03.894 00:07:03.894 Controller Memory Buffer Support 00:07:03.894 ================================ 00:07:03.894 Supported: No 00:07:03.894 00:07:03.894 Persistent Memory Region Support 00:07:03.894 ================================ 00:07:03.894 Supported: No 00:07:03.894 00:07:03.894 Admin Command Set Attributes 00:07:03.894 ============================ 00:07:03.894 Security Send/Receive: 
Not Supported 00:07:03.894 Format NVM: Supported 00:07:03.894 Firmware Activate/Download: Not Supported 00:07:03.894 Namespace Management: Supported 00:07:03.894 Device Self-Test: Not Supported 00:07:03.894 Directives: Supported 00:07:03.894 NVMe-MI: Not Supported 00:07:03.894 Virtualization Management: Not Supported 00:07:03.894 Doorbell Buffer Config: Supported 00:07:03.894 Get LBA Status Capability: Not Supported 00:07:03.894 Command & Feature Lockdown Capability: Not Supported 00:07:03.894 Abort Command Limit: 4 00:07:03.894 Async Event Request Limit: 4 00:07:03.894 Number of Firmware Slots: N/A 00:07:03.894 Firmware Slot 1 Read-Only: N/A 00:07:03.894 Firmware Activation Without Reset: N/A 00:07:03.894 Multiple Update Detection Support: N/A 00:07:03.894 Firmware Update Granularity: No Information Provided 00:07:03.894 Per-Namespace SMART Log: Yes 00:07:03.894 Asymmetric Namespace Access Log Page: Not Supported 00:07:03.894 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:03.894 Command Effects Log Page: Supported 00:07:03.894 Get Log Page Extended Data: Supported 00:07:03.894 Telemetry Log Pages: Not Supported 00:07:03.894 Persistent Event Log Pages: Not Supported 00:07:03.894 Supported Log Pages Log Page: May Support 00:07:03.894 Commands Supported & Effects Log Page: Not Supported 00:07:03.894 Feature Identifiers & Effects Log Page:May Support 00:07:03.894 NVMe-MI Commands & Effects Log Page: May Support 00:07:03.894 Data Area 4 for Telemetry Log: Not Supported 00:07:03.894 Error Log Page Entries Supported: 1 00:07:03.894 Keep Alive: Not Supported 00:07:03.894 00:07:03.894 NVM Command Set Attributes 00:07:03.894 ========================== 00:07:03.894 Submission Queue Entry Size 00:07:03.894 Max: 64 00:07:03.894 Min: 64 00:07:03.894 Completion Queue Entry Size 00:07:03.894 Max: 16 00:07:03.894 Min: 16 00:07:03.894 Number of Namespaces: 256 00:07:03.894 Compare Command: Supported 00:07:03.894 Write Uncorrectable Command: Not Supported 00:07:03.894 Dataset Management Command: Supported 00:07:03.894 Write Zeroes Command: Supported 00:07:03.894 Set Features Save Field: Supported 00:07:03.894 Reservations: Not Supported 00:07:03.894 Timestamp: Supported 00:07:03.894 Copy: Supported 00:07:03.894 Volatile Write Cache: Present 00:07:03.894 Atomic Write Unit (Normal): 1 00:07:03.894 Atomic Write Unit (PFail): 1 00:07:03.894 Atomic Compare & Write Unit: 1 00:07:03.894 Fused Compare & Write: Not Supported 00:07:03.894 Scatter-Gather List 00:07:03.894 SGL Command Set: Supported 00:07:03.894 SGL Keyed: Not Supported 00:07:03.894 SGL Bit Bucket Descriptor: Not Supported 00:07:03.894 SGL Metadata Pointer: Not Supported 00:07:03.894 Oversized SGL: Not Supported 00:07:03.894 SGL Metadata Address: Not Supported 00:07:03.894 SGL Offset: Not Supported 00:07:03.894 Transport SGL Data Block: Not Supported 00:07:03.894 Replay Protected Memory Block: Not Supported 00:07:03.894 00:07:03.894 Firmware Slot Information 00:07:03.894 ========================= 00:07:03.894 Active slot: 1 00:07:03.894 Slot 1 Firmware Revision: 1.0 00:07:03.894 00:07:03.894 00:07:03.894 Commands Supported and Effects 00:07:03.894 ============================== 00:07:03.894 Admin Commands 00:07:03.894 -------------- 00:07:03.894 Delete I/O Submission Queue (00h): Supported 00:07:03.894 Create I/O Submission Queue (01h): Supported 00:07:03.894 Get Log Page (02h): Supported 00:07:03.894 Delete I/O Completion Queue (04h): Supported 00:07:03.894 Create I/O Completion Queue (05h): Supported 00:07:03.894 Identify (06h): Supported 
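The dump above, and the three that follow it, come from a single spdk_nvme_identify run that walks every controller it can claim. To inspect just one device, the tool can instead be pointed at a single PCIe address via a transport ID string; treat the exact option spelling as an assumption to check against spdk_nvme_identify --help for the build in use:

  # Identify only the controller at 0000:00:10.0; -i 0 joins the same
  # shared-memory instance the stub created for this run.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:10.0' -i 0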
00:07:03.894 Abort (08h): Supported 00:07:03.894 Set Features (09h): Supported 00:07:03.894 Get Features (0Ah): Supported 00:07:03.894 Asynchronous Event Request (0Ch): Supported 00:07:03.894 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:03.894 Directive Send (19h): Supported 00:07:03.894 Directive Receive (1Ah): Supported 00:07:03.894 Virtualization Management (1Ch): Supported 00:07:03.894 Doorbell Buffer Config (7Ch): Supported 00:07:03.894 Format NVM (80h): Supported LBA-Change 00:07:03.894 I/O Commands 00:07:03.894 ------------ 00:07:03.894 Flush (00h): Supported LBA-Change 00:07:03.894 Write (01h): Supported LBA-Change 00:07:03.894 Read (02h): Supported 00:07:03.894 Compare (05h): Supported 00:07:03.894 Write Zeroes (08h): Supported LBA-Change 00:07:03.894 Dataset Management (09h): Supported LBA-Change 00:07:03.894 Unknown (0Ch): Supported 00:07:03.894 Unknown (12h): Supported 00:07:03.894 Copy (19h): Supported LBA-Change 00:07:03.894 Unknown (1Dh): Supported LBA-Change 00:07:03.894 00:07:03.894 Error Log 00:07:03.894 ========= 00:07:03.894 00:07:03.894 Arbitration 00:07:03.894 =========== 00:07:03.894 Arbitration Burst: no limit 00:07:03.894 00:07:03.894 Power Management 00:07:03.894 ================ 00:07:03.894 Number of Power States: 1 00:07:03.894 Current Power State: Power State #0 00:07:03.894 Power State #0: 00:07:03.894 Max Power: 25.00 W 00:07:03.894 Non-Operational State: Operational 00:07:03.894 Entry Latency: 16 microseconds 00:07:03.894 Exit Latency: 4 microseconds 00:07:03.894 Relative Read Throughput: 0 00:07:03.894 Relative Read Latency: 0 00:07:03.894 Relative Write Throughput: 0 00:07:03.894 Relative Write Latency: 0 00:07:03.894 Idle Power: Not Reported 00:07:03.894 [2024-12-10 02:54:58.167122] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62824 terminated unexpected 00:07:03.894 Active Power: Not Reported 00:07:03.894 Non-Operational Permissive Mode: Not Supported 00:07:03.894 00:07:03.894 Health Information 00:07:03.894 ================== 00:07:03.894 Critical Warnings: 00:07:03.894 Available Spare Space: OK 00:07:03.894 Temperature: OK 00:07:03.894 Device Reliability: OK 00:07:03.894 Read Only: No 00:07:03.894 Volatile Memory Backup: OK 00:07:03.894 Current Temperature: 323 Kelvin (50 Celsius) 00:07:03.894 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:03.894 Available Spare: 0% 00:07:03.894 Available Spare Threshold: 0% 00:07:03.894 Life Percentage Used: 0% 00:07:03.894 Data Units Read: 673 00:07:03.894 Data Units Written: 601 00:07:03.894 Host Read Commands: 38036 00:07:03.894 Host Write Commands: 37822 00:07:03.894 Controller Busy Time: 0 minutes 00:07:03.894 Power Cycles: 0 00:07:03.894 Power On Hours: 0 hours 00:07:03.894 Unsafe Shutdowns: 0 00:07:03.894 Unrecoverable Media Errors: 0 00:07:03.894 Lifetime Error Log Entries: 0 00:07:03.894 Warning Temperature Time: 0 minutes 00:07:03.894 Critical Temperature Time: 0 minutes 00:07:03.894 00:07:03.894 Number of Queues 00:07:03.894 ================ 00:07:03.894 Number of I/O Submission Queues: 64 00:07:03.894 Number of I/O Completion Queues: 64 00:07:03.894 00:07:03.894 ZNS Specific Controller Data 00:07:03.894 ============================ 00:07:03.894 Zone Append Size Limit: 0 00:07:03.894 00:07:03.894 00:07:03.894 Active Namespaces 00:07:03.894 ================= 00:07:03.894 Namespace ID:1 00:07:03.894 Error Recovery Timeout: Unlimited 00:07:03.894 Command Set Identifier: NVM (00h) 00:07:03.894 Deallocate: Supported
Deallocated/Unwritten Error: Supported 00:07:03.894 Deallocated Read Value: All 0x00 00:07:03.895 Deallocate in Write Zeroes: Not Supported 00:07:03.895 Deallocated Guard Field: 0xFFFF 00:07:03.895 Flush: Supported 00:07:03.895 Reservation: Not Supported 00:07:03.895 Metadata Transferred as: Separate Metadata Buffer 00:07:03.895 Namespace Sharing Capabilities: Private 00:07:03.895 Size (in LBAs): 1548666 (5GiB) 00:07:03.895 Capacity (in LBAs): 1548666 (5GiB) 00:07:03.895 Utilization (in LBAs): 1548666 (5GiB) 00:07:03.895 Thin Provisioning: Not Supported 00:07:03.895 Per-NS Atomic Units: No 00:07:03.895 Maximum Single Source Range Length: 128 00:07:03.895 Maximum Copy Length: 128 00:07:03.895 Maximum Source Range Count: 128 00:07:03.895 NGUID/EUI64 Never Reused: No 00:07:03.895 Namespace Write Protected: No 00:07:03.895 Number of LBA Formats: 8 00:07:03.895 Current LBA Format: LBA Format #07 00:07:03.895 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:03.895 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:03.895 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:03.895 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:03.895 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:03.895 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:03.895 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:03.895 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:03.895 00:07:03.895 NVM Specific Namespace Data 00:07:03.895 =========================== 00:07:03.895 Logical Block Storage Tag Mask: 0 00:07:03.895 Protection Information Capabilities: 00:07:03.895 16b Guard Protection Information Storage Tag Support: No 00:07:03.895 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:03.895 Storage Tag Check Read Support: No 00:07:03.895 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.895 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.895 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.895 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.895 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.895 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.895 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.895 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.895 ===================================================== 00:07:03.895 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:03.895 ===================================================== 00:07:03.895 Controller Capabilities/Features 00:07:03.895 ================================ 00:07:03.895 Vendor ID: 1b36 00:07:03.895 Subsystem Vendor ID: 1af4 00:07:03.895 Serial Number: 12341 00:07:03.895 Model Number: QEMU NVMe Ctrl 00:07:03.895 Firmware Version: 8.0.0 00:07:03.895 Recommended Arb Burst: 6 00:07:03.895 IEEE OUI Identifier: 00 54 52 00:07:03.895 Multi-path I/O 00:07:03.895 May have multiple subsystem ports: No 00:07:03.895 May have multiple controllers: No 00:07:03.895 Associated with SR-IOV VF: No 00:07:03.895 Max Data Transfer Size: 524288 00:07:03.895 Max Number of Namespaces: 256 00:07:03.895 Max Number of I/O Queues: 64 00:07:03.895 NVMe Specification Version (VS): 1.4 00:07:03.895 NVMe 
Specification Version (Identify): 1.4 00:07:03.895 Maximum Queue Entries: 2048 00:07:03.895 Contiguous Queues Required: Yes 00:07:03.895 Arbitration Mechanisms Supported 00:07:03.895 Weighted Round Robin: Not Supported 00:07:03.895 Vendor Specific: Not Supported 00:07:03.895 Reset Timeout: 7500 ms 00:07:03.895 Doorbell Stride: 4 bytes 00:07:03.895 NVM Subsystem Reset: Not Supported 00:07:03.895 Command Sets Supported 00:07:03.895 NVM Command Set: Supported 00:07:03.895 Boot Partition: Not Supported 00:07:03.895 Memory Page Size Minimum: 4096 bytes 00:07:03.895 Memory Page Size Maximum: 65536 bytes 00:07:03.895 Persistent Memory Region: Not Supported 00:07:03.895 Optional Asynchronous Events Supported 00:07:03.895 Namespace Attribute Notices: Supported 00:07:03.895 Firmware Activation Notices: Not Supported 00:07:03.895 ANA Change Notices: Not Supported 00:07:03.895 PLE Aggregate Log Change Notices: Not Supported 00:07:03.895 LBA Status Info Alert Notices: Not Supported 00:07:03.895 EGE Aggregate Log Change Notices: Not Supported 00:07:03.895 Normal NVM Subsystem Shutdown event: Not Supported 00:07:03.895 Zone Descriptor Change Notices: Not Supported 00:07:03.895 Discovery Log Change Notices: Not Supported 00:07:03.895 Controller Attributes 00:07:03.895 128-bit Host Identifier: Not Supported 00:07:03.895 Non-Operational Permissive Mode: Not Supported 00:07:03.895 NVM Sets: Not Supported 00:07:03.895 Read Recovery Levels: Not Supported 00:07:03.895 Endurance Groups: Not Supported 00:07:03.895 Predictable Latency Mode: Not Supported 00:07:03.895 Traffic Based Keep ALive: Not Supported 00:07:03.895 Namespace Granularity: Not Supported 00:07:03.895 SQ Associations: Not Supported 00:07:03.895 UUID List: Not Supported 00:07:03.895 Multi-Domain Subsystem: Not Supported 00:07:03.895 Fixed Capacity Management: Not Supported 00:07:03.895 Variable Capacity Management: Not Supported 00:07:03.895 Delete Endurance Group: Not Supported 00:07:03.895 Delete NVM Set: Not Supported 00:07:03.895 Extended LBA Formats Supported: Supported 00:07:03.895 Flexible Data Placement Supported: Not Supported 00:07:03.895 00:07:03.895 Controller Memory Buffer Support 00:07:03.895 ================================ 00:07:03.895 Supported: No 00:07:03.895 00:07:03.895 Persistent Memory Region Support 00:07:03.895 ================================ 00:07:03.895 Supported: No 00:07:03.895 00:07:03.895 Admin Command Set Attributes 00:07:03.895 ============================ 00:07:03.895 Security Send/Receive: Not Supported 00:07:03.895 Format NVM: Supported 00:07:03.895 Firmware Activate/Download: Not Supported 00:07:03.895 Namespace Management: Supported 00:07:03.895 Device Self-Test: Not Supported 00:07:03.895 Directives: Supported 00:07:03.895 NVMe-MI: Not Supported 00:07:03.895 Virtualization Management: Not Supported 00:07:03.895 Doorbell Buffer Config: Supported 00:07:03.895 Get LBA Status Capability: Not Supported 00:07:03.895 Command & Feature Lockdown Capability: Not Supported 00:07:03.895 Abort Command Limit: 4 00:07:03.895 Async Event Request Limit: 4 00:07:03.895 Number of Firmware Slots: N/A 00:07:03.895 Firmware Slot 1 Read-Only: N/A 00:07:03.895 Firmware Activation Without Reset: N/A 00:07:03.895 Multiple Update Detection Support: N/A 00:07:03.895 Firmware Update Granularity: No Information Provided 00:07:03.895 Per-Namespace SMART Log: Yes 00:07:03.895 Asymmetric Namespace Access Log Page: Not Supported 00:07:03.895 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:03.895 Command Effects Log Page: Supported 
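A quick sanity check on the namespace sizes in these dumps: the GiB figure is just LBA count times LBA data size. For the 12341 namespace reported below, 1310720 LBAs at 4096 bytes each is exactly 5 GiB:

  echo $(( 1310720 * 4096 ))            # 5368709120 bytes
  echo $(( 1310720 * 4096 / 1024**3 ))  # 5 (GiB)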
00:07:03.895 Get Log Page Extended Data: Supported 00:07:03.895 Telemetry Log Pages: Not Supported 00:07:03.895 Persistent Event Log Pages: Not Supported 00:07:03.895 Supported Log Pages Log Page: May Support 00:07:03.895 Commands Supported & Effects Log Page: Not Supported 00:07:03.895 Feature Identifiers & Effects Log Page:May Support 00:07:03.895 NVMe-MI Commands & Effects Log Page: May Support 00:07:03.895 Data Area 4 for Telemetry Log: Not Supported 00:07:03.895 Error Log Page Entries Supported: 1 00:07:03.895 Keep Alive: Not Supported 00:07:03.895 00:07:03.895 NVM Command Set Attributes 00:07:03.895 ========================== 00:07:03.895 Submission Queue Entry Size 00:07:03.895 Max: 64 00:07:03.895 Min: 64 00:07:03.895 Completion Queue Entry Size 00:07:03.895 Max: 16 00:07:03.895 Min: 16 00:07:03.895 Number of Namespaces: 256 00:07:03.895 Compare Command: Supported 00:07:03.895 Write Uncorrectable Command: Not Supported 00:07:03.895 Dataset Management Command: Supported 00:07:03.895 Write Zeroes Command: Supported 00:07:03.895 Set Features Save Field: Supported 00:07:03.895 Reservations: Not Supported 00:07:03.895 Timestamp: Supported 00:07:03.895 Copy: Supported 00:07:03.895 Volatile Write Cache: Present 00:07:03.895 Atomic Write Unit (Normal): 1 00:07:03.895 Atomic Write Unit (PFail): 1 00:07:03.895 Atomic Compare & Write Unit: 1 00:07:03.895 Fused Compare & Write: Not Supported 00:07:03.895 Scatter-Gather List 00:07:03.895 SGL Command Set: Supported 00:07:03.895 SGL Keyed: Not Supported 00:07:03.895 SGL Bit Bucket Descriptor: Not Supported 00:07:03.895 SGL Metadata Pointer: Not Supported 00:07:03.895 Oversized SGL: Not Supported 00:07:03.895 SGL Metadata Address: Not Supported 00:07:03.895 SGL Offset: Not Supported 00:07:03.895 Transport SGL Data Block: Not Supported 00:07:03.895 Replay Protected Memory Block: Not Supported 00:07:03.895 00:07:03.895 Firmware Slot Information 00:07:03.895 ========================= 00:07:03.895 Active slot: 1 00:07:03.895 Slot 1 Firmware Revision: 1.0 00:07:03.895 00:07:03.895 00:07:03.895 Commands Supported and Effects 00:07:03.895 ============================== 00:07:03.895 Admin Commands 00:07:03.895 -------------- 00:07:03.895 Delete I/O Submission Queue (00h): Supported 00:07:03.895 Create I/O Submission Queue (01h): Supported 00:07:03.895 Get Log Page (02h): Supported 00:07:03.895 Delete I/O Completion Queue (04h): Supported 00:07:03.895 Create I/O Completion Queue (05h): Supported 00:07:03.895 Identify (06h): Supported 00:07:03.895 Abort (08h): Supported 00:07:03.896 Set Features (09h): Supported 00:07:03.896 Get Features (0Ah): Supported 00:07:03.896 Asynchronous Event Request (0Ch): Supported 00:07:03.896 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:03.896 Directive Send (19h): Supported 00:07:03.896 Directive Receive (1Ah): Supported 00:07:03.896 Virtualization Management (1Ch): Supported 00:07:03.896 Doorbell Buffer Config (7Ch): Supported 00:07:03.896 Format NVM (80h): Supported LBA-Change 00:07:03.896 I/O Commands 00:07:03.896 ------------ 00:07:03.896 Flush (00h): Supported LBA-Change 00:07:03.896 Write (01h): Supported LBA-Change 00:07:03.896 Read (02h): Supported 00:07:03.896 Compare (05h): Supported 00:07:03.896 Write Zeroes (08h): Supported LBA-Change 00:07:03.896 Dataset Management (09h): Supported LBA-Change 00:07:03.896 Unknown (0Ch): Supported 00:07:03.896 Unknown (12h): Supported 00:07:03.896 Copy (19h): Supported LBA-Change 00:07:03.896 Unknown (1Dh): Supported LBA-Change 00:07:03.896 00:07:03.896 Error 
Log 00:07:03.896 ========= 00:07:03.896 00:07:03.896 Arbitration 00:07:03.896 =========== 00:07:03.896 Arbitration Burst: no limit 00:07:03.896 00:07:03.896 Power Management 00:07:03.896 ================ 00:07:03.896 Number of Power States: 1 00:07:03.896 Current Power State: Power State #0 00:07:03.896 Power State #0: 00:07:03.896 Max Power: 25.00 W 00:07:03.896 Non-Operational State: Operational 00:07:03.896 Entry Latency: 16 microseconds 00:07:03.896 Exit Latency: 4 microseconds 00:07:03.896 Relative Read Throughput: 0 00:07:03.896 Relative Read Latency: 0 00:07:03.896 Relative Write Throughput: 0 00:07:03.896 Relative Write Latency: 0 00:07:03.896 Idle Power: Not Reported 00:07:03.896 Active Power: Not Reported 00:07:03.896 Non-Operational Permissive Mode: Not Supported 00:07:03.896 00:07:03.896 Health Information 00:07:03.896 ================== 00:07:03.896 Critical Warnings: 00:07:03.896 Available Spare Space: OK 00:07:03.896 Temperature: OK 00:07:03.896 [2024-12-10 02:54:58.168089] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62824 terminated unexpected 00:07:03.896 Device Reliability: OK 00:07:03.896 Read Only: No 00:07:03.896 Volatile Memory Backup: OK 00:07:03.896 Current Temperature: 323 Kelvin (50 Celsius) 00:07:03.896 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:03.896 Available Spare: 0% 00:07:03.896 Available Spare Threshold: 0% 00:07:03.896 Life Percentage Used: 0% 00:07:03.896 Data Units Read: 1009 00:07:03.896 Data Units Written: 869 00:07:03.896 Host Read Commands: 55390 00:07:03.896 Host Write Commands: 54081 00:07:03.896 Controller Busy Time: 0 minutes 00:07:03.896 Power Cycles: 0 00:07:03.896 Power On Hours: 0 hours 00:07:03.896 Unsafe Shutdowns: 0 00:07:03.896 Unrecoverable Media Errors: 0 00:07:03.896 Lifetime Error Log Entries: 0 00:07:03.896 Warning Temperature Time: 0 minutes 00:07:03.896 Critical Temperature Time: 0 minutes 00:07:03.896 00:07:03.896 Number of Queues 00:07:03.896 ================ 00:07:03.896 Number of I/O Submission Queues: 64 00:07:03.896 Number of I/O Completion Queues: 64 00:07:03.896 00:07:03.896 ZNS Specific Controller Data 00:07:03.896 ============================ 00:07:03.896 Zone Append Size Limit: 0 00:07:03.896 00:07:03.896 00:07:03.896 Active Namespaces 00:07:03.896 ================= 00:07:03.896 Namespace ID:1 00:07:03.896 Error Recovery Timeout: Unlimited 00:07:03.896 Command Set Identifier: NVM (00h) 00:07:03.896 Deallocate: Supported 00:07:03.896 Deallocated/Unwritten Error: Supported 00:07:03.896 Deallocated Read Value: All 0x00 00:07:03.896 Deallocate in Write Zeroes: Not Supported 00:07:03.896 Deallocated Guard Field: 0xFFFF 00:07:03.896 Flush: Supported 00:07:03.896 Reservation: Not Supported 00:07:03.896 Namespace Sharing Capabilities: Private 00:07:03.896 Size (in LBAs): 1310720 (5GiB) 00:07:03.896 Capacity (in LBAs): 1310720 (5GiB) 00:07:03.896 Utilization (in LBAs): 1310720 (5GiB) 00:07:03.896 Thin Provisioning: Not Supported 00:07:03.896 Per-NS Atomic Units: No 00:07:03.896 Maximum Single Source Range Length: 128 00:07:03.896 Maximum Copy Length: 128 00:07:03.896 Maximum Source Range Count: 128 00:07:03.896 NGUID/EUI64 Never Reused: No 00:07:03.896 Namespace Write Protected: No 00:07:03.896 Number of LBA Formats: 8 00:07:03.896 Current LBA Format: LBA Format #04 00:07:03.896 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:03.896 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:03.896 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:03.896 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:07:03.896 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:03.896 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:03.896 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:03.896 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:03.896 00:07:03.896 NVM Specific Namespace Data 00:07:03.896 =========================== 00:07:03.896 Logical Block Storage Tag Mask: 0 00:07:03.896 Protection Information Capabilities: 00:07:03.896 16b Guard Protection Information Storage Tag Support: No 00:07:03.896 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:03.896 Storage Tag Check Read Support: No 00:07:03.896 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.896 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.896 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.896 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.896 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.896 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.896 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.896 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.896 ===================================================== 00:07:03.896 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:03.896 ===================================================== 00:07:03.896 Controller Capabilities/Features 00:07:03.896 ================================ 00:07:03.896 Vendor ID: 1b36 00:07:03.896 Subsystem Vendor ID: 1af4 00:07:03.896 Serial Number: 12343 00:07:03.896 Model Number: QEMU NVMe Ctrl 00:07:03.896 Firmware Version: 8.0.0 00:07:03.896 Recommended Arb Burst: 6 00:07:03.896 IEEE OUI Identifier: 00 54 52 00:07:03.896 Multi-path I/O 00:07:03.896 May have multiple subsystem ports: No 00:07:03.896 May have multiple controllers: Yes 00:07:03.896 Associated with SR-IOV VF: No 00:07:03.896 Max Data Transfer Size: 524288 00:07:03.896 Max Number of Namespaces: 256 00:07:03.896 Max Number of I/O Queues: 64 00:07:03.896 NVMe Specification Version (VS): 1.4 00:07:03.896 NVMe Specification Version (Identify): 1.4 00:07:03.896 Maximum Queue Entries: 2048 00:07:03.896 Contiguous Queues Required: Yes 00:07:03.896 Arbitration Mechanisms Supported 00:07:03.896 Weighted Round Robin: Not Supported 00:07:03.896 Vendor Specific: Not Supported 00:07:03.896 Reset Timeout: 7500 ms 00:07:03.896 Doorbell Stride: 4 bytes 00:07:03.896 NVM Subsystem Reset: Not Supported 00:07:03.896 Command Sets Supported 00:07:03.896 NVM Command Set: Supported 00:07:03.896 Boot Partition: Not Supported 00:07:03.896 Memory Page Size Minimum: 4096 bytes 00:07:03.896 Memory Page Size Maximum: 65536 bytes 00:07:03.896 Persistent Memory Region: Not Supported 00:07:03.896 Optional Asynchronous Events Supported 00:07:03.896 Namespace Attribute Notices: Supported 00:07:03.896 Firmware Activation Notices: Not Supported 00:07:03.896 ANA Change Notices: Not Supported 00:07:03.896 PLE Aggregate Log Change Notices: Not Supported 00:07:03.896 LBA Status Info Alert Notices: Not Supported 00:07:03.896 EGE Aggregate Log Change Notices: Not Supported 00:07:03.896 Normal NVM Subsystem Shutdown event: Not Supported 00:07:03.896 Zone 
Descriptor Change Notices: Not Supported 00:07:03.896 Discovery Log Change Notices: Not Supported 00:07:03.896 Controller Attributes 00:07:03.896 128-bit Host Identifier: Not Supported 00:07:03.896 Non-Operational Permissive Mode: Not Supported 00:07:03.896 NVM Sets: Not Supported 00:07:03.896 Read Recovery Levels: Not Supported 00:07:03.896 Endurance Groups: Supported 00:07:03.896 Predictable Latency Mode: Not Supported 00:07:03.896 Traffic Based Keep ALive: Not Supported 00:07:03.896 Namespace Granularity: Not Supported 00:07:03.896 SQ Associations: Not Supported 00:07:03.896 UUID List: Not Supported 00:07:03.896 Multi-Domain Subsystem: Not Supported 00:07:03.896 Fixed Capacity Management: Not Supported 00:07:03.896 Variable Capacity Management: Not Supported 00:07:03.896 Delete Endurance Group: Not Supported 00:07:03.896 Delete NVM Set: Not Supported 00:07:03.896 Extended LBA Formats Supported: Supported 00:07:03.896 Flexible Data Placement Supported: Supported 00:07:03.896 00:07:03.896 Controller Memory Buffer Support 00:07:03.896 ================================ 00:07:03.896 Supported: No 00:07:03.896 00:07:03.896 Persistent Memory Region Support 00:07:03.896 ================================ 00:07:03.896 Supported: No 00:07:03.896 00:07:03.896 Admin Command Set Attributes 00:07:03.897 ============================ 00:07:03.897 Security Send/Receive: Not Supported 00:07:03.897 Format NVM: Supported 00:07:03.897 Firmware Activate/Download: Not Supported 00:07:03.897 Namespace Management: Supported 00:07:03.897 Device Self-Test: Not Supported 00:07:03.897 Directives: Supported 00:07:03.897 NVMe-MI: Not Supported 00:07:03.897 Virtualization Management: Not Supported 00:07:03.897 Doorbell Buffer Config: Supported 00:07:03.897 Get LBA Status Capability: Not Supported 00:07:03.897 Command & Feature Lockdown Capability: Not Supported 00:07:03.897 Abort Command Limit: 4 00:07:03.897 Async Event Request Limit: 4 00:07:03.897 Number of Firmware Slots: N/A 00:07:03.897 Firmware Slot 1 Read-Only: N/A 00:07:03.897 Firmware Activation Without Reset: N/A 00:07:03.897 Multiple Update Detection Support: N/A 00:07:03.897 Firmware Update Granularity: No Information Provided 00:07:03.897 Per-Namespace SMART Log: Yes 00:07:03.897 Asymmetric Namespace Access Log Page: Not Supported 00:07:03.897 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:03.897 Command Effects Log Page: Supported 00:07:03.897 Get Log Page Extended Data: Supported 00:07:03.897 Telemetry Log Pages: Not Supported 00:07:03.897 Persistent Event Log Pages: Not Supported 00:07:03.897 Supported Log Pages Log Page: May Support 00:07:03.897 Commands Supported & Effects Log Page: Not Supported 00:07:03.897 Feature Identifiers & Effects Log Page:May Support 00:07:03.897 NVMe-MI Commands & Effects Log Page: May Support 00:07:03.897 Data Area 4 for Telemetry Log: Not Supported 00:07:03.897 Error Log Page Entries Supported: 1 00:07:03.897 Keep Alive: Not Supported 00:07:03.897 00:07:03.897 NVM Command Set Attributes 00:07:03.897 ========================== 00:07:03.897 Submission Queue Entry Size 00:07:03.897 Max: 64 00:07:03.897 Min: 64 00:07:03.897 Completion Queue Entry Size 00:07:03.897 Max: 16 00:07:03.897 Min: 16 00:07:03.897 Number of Namespaces: 256 00:07:03.897 Compare Command: Supported 00:07:03.897 Write Uncorrectable Command: Not Supported 00:07:03.897 Dataset Management Command: Supported 00:07:03.897 Write Zeroes Command: Supported 00:07:03.897 Set Features Save Field: Supported 00:07:03.897 Reservations: Not Supported 00:07:03.897 
Timestamp: Supported 00:07:03.897 Copy: Supported 00:07:03.897 Volatile Write Cache: Present 00:07:03.897 Atomic Write Unit (Normal): 1 00:07:03.897 Atomic Write Unit (PFail): 1 00:07:03.897 Atomic Compare & Write Unit: 1 00:07:03.897 Fused Compare & Write: Not Supported 00:07:03.897 Scatter-Gather List 00:07:03.897 SGL Command Set: Supported 00:07:03.897 SGL Keyed: Not Supported 00:07:03.897 SGL Bit Bucket Descriptor: Not Supported 00:07:03.897 SGL Metadata Pointer: Not Supported 00:07:03.897 Oversized SGL: Not Supported 00:07:03.897 SGL Metadata Address: Not Supported 00:07:03.897 SGL Offset: Not Supported 00:07:03.897 Transport SGL Data Block: Not Supported 00:07:03.897 Replay Protected Memory Block: Not Supported 00:07:03.897 00:07:03.897 Firmware Slot Information 00:07:03.897 ========================= 00:07:03.897 Active slot: 1 00:07:03.897 Slot 1 Firmware Revision: 1.0 00:07:03.897 00:07:03.897 00:07:03.897 Commands Supported and Effects 00:07:03.897 ============================== 00:07:03.897 Admin Commands 00:07:03.897 -------------- 00:07:03.897 Delete I/O Submission Queue (00h): Supported 00:07:03.897 Create I/O Submission Queue (01h): Supported 00:07:03.897 Get Log Page (02h): Supported 00:07:03.897 Delete I/O Completion Queue (04h): Supported 00:07:03.897 Create I/O Completion Queue (05h): Supported 00:07:03.897 Identify (06h): Supported 00:07:03.897 Abort (08h): Supported 00:07:03.897 Set Features (09h): Supported 00:07:03.897 Get Features (0Ah): Supported 00:07:03.897 Asynchronous Event Request (0Ch): Supported 00:07:03.897 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:03.897 Directive Send (19h): Supported 00:07:03.897 Directive Receive (1Ah): Supported 00:07:03.897 Virtualization Management (1Ch): Supported 00:07:03.897 Doorbell Buffer Config (7Ch): Supported 00:07:03.897 Format NVM (80h): Supported LBA-Change 00:07:03.897 I/O Commands 00:07:03.897 ------------ 00:07:03.897 Flush (00h): Supported LBA-Change 00:07:03.897 Write (01h): Supported LBA-Change 00:07:03.897 Read (02h): Supported 00:07:03.897 Compare (05h): Supported 00:07:03.897 Write Zeroes (08h): Supported LBA-Change 00:07:03.897 Dataset Management (09h): Supported LBA-Change 00:07:03.897 Unknown (0Ch): Supported 00:07:03.897 Unknown (12h): Supported 00:07:03.897 Copy (19h): Supported LBA-Change 00:07:03.897 Unknown (1Dh): Supported LBA-Change 00:07:03.897 00:07:03.897 Error Log 00:07:03.897 ========= 00:07:03.897 00:07:03.897 Arbitration 00:07:03.897 =========== 00:07:03.897 Arbitration Burst: no limit 00:07:03.897 00:07:03.897 Power Management 00:07:03.897 ================ 00:07:03.897 Number of Power States: 1 00:07:03.897 Current Power State: Power State #0 00:07:03.897 Power State #0: 00:07:03.897 Max Power: 25.00 W 00:07:03.897 Non-Operational State: Operational 00:07:03.897 Entry Latency: 16 microseconds 00:07:03.897 Exit Latency: 4 microseconds 00:07:03.897 Relative Read Throughput: 0 00:07:03.897 Relative Read Latency: 0 00:07:03.897 Relative Write Throughput: 0 00:07:03.897 Relative Write Latency: 0 00:07:03.897 Idle Power: Not Reported 00:07:03.897 Active Power: Not Reported 00:07:03.897 Non-Operational Permissive Mode: Not Supported 00:07:03.897 00:07:03.897 Health Information 00:07:03.897 ================== 00:07:03.897 Critical Warnings: 00:07:03.897 Available Spare Space: OK 00:07:03.897 Temperature: OK 00:07:03.897 Device Reliability: OK 00:07:03.897 Read Only: No 00:07:03.897 Volatile Memory Backup: OK 00:07:03.897 Current Temperature: 323 Kelvin (50 Celsius) 00:07:03.897 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:03.897 Available Spare: 0% 00:07:03.897 Available Spare Threshold: 0% 00:07:03.897 Life Percentage Used: 0% 00:07:03.897 Data Units Read: 821 00:07:03.897 Data Units Written: 750 00:07:03.897 Host Read Commands: 39669 00:07:03.897 Host Write Commands: 39093 00:07:03.897 Controller Busy Time: 0 minutes 00:07:03.897 Power Cycles: 0 00:07:03.897 Power On Hours: 0 hours 00:07:03.897 Unsafe Shutdowns: 0 00:07:03.897 Unrecoverable Media Errors: 0 00:07:03.897 Lifetime Error Log Entries: 0 00:07:03.897 Warning Temperature Time: 0 minutes 00:07:03.897 Critical Temperature Time: 0 minutes 00:07:03.897 00:07:03.897 Number of Queues 00:07:03.897 ================ 00:07:03.897 Number of I/O Submission Queues: 64 00:07:03.897 Number of I/O Completion Queues: 64 00:07:03.897 00:07:03.897 ZNS Specific Controller Data 00:07:03.897 ============================ 00:07:03.897 Zone Append Size Limit: 0 00:07:03.897 00:07:03.897 00:07:03.897 Active Namespaces 00:07:03.897 ================= 00:07:03.897 Namespace ID:1 00:07:03.897 Error Recovery Timeout: Unlimited 00:07:03.897 Command Set Identifier: NVM (00h) 00:07:03.897 Deallocate: Supported 00:07:03.897 Deallocated/Unwritten Error: Supported 00:07:03.897 Deallocated Read Value: All 0x00 00:07:03.897 Deallocate in Write Zeroes: Not Supported 00:07:03.897 Deallocated Guard Field: 0xFFFF 00:07:03.897 Flush: Supported 00:07:03.897 Reservation: Not Supported 00:07:03.897 Namespace Sharing Capabilities: Multiple Controllers 00:07:03.897 Size (in LBAs): 262144 (1GiB) 00:07:03.897 Capacity (in LBAs): 262144 (1GiB) 00:07:03.897 Utilization (in LBAs): 262144 (1GiB) 00:07:03.897 Thin Provisioning: Not Supported 00:07:03.897 Per-NS Atomic Units: No 00:07:03.897 Maximum Single Source Range Length: 128 00:07:03.897 Maximum Copy Length: 128 00:07:03.897 Maximum Source Range Count: 128 00:07:03.897 NGUID/EUI64 Never Reused: No 00:07:03.897 Namespace Write Protected: No 00:07:03.897 Endurance group ID: 1 00:07:03.897 Number of LBA Formats: 8 00:07:03.897 Current LBA Format: LBA Format #04 00:07:03.897 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:03.897 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:03.897 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:03.897 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:03.897 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:03.897 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:03.897 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:03.897 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:03.897 00:07:03.897 Get Feature FDP: 00:07:03.897 ================ 00:07:03.897 Enabled: Yes 00:07:03.897 FDP configuration index: 0 00:07:03.897 00:07:03.897 FDP configurations log page 00:07:03.897 =========================== 00:07:03.897 Number of FDP configurations: 1 00:07:03.897 Version: 0 00:07:03.897 Size: 112 00:07:03.897 FDP Configuration Descriptor: 0 00:07:03.897 Descriptor Size: 96 00:07:03.897 Reclaim Group Identifier format: 2 00:07:03.897 FDP Volatile Write Cache: Not Present 00:07:03.897 FDP Configuration: Valid 00:07:03.898 Vendor Specific Size: 0 00:07:03.898 Number of Reclaim Groups: 2 00:07:03.898 Number of Reclaim Unit Handles: 8 00:07:03.898 Max Placement Identifiers: 128 00:07:03.898 Number of Namespaces Supported: 256 00:07:03.898 Reclaim unit Nominal Size: 6000000 bytes 00:07:03.898 Estimated Reclaim Unit Time Limit: Not Reported 00:07:03.898 RUH Desc #000: RUH Type: Initially Isolated 00:07:03.898 RUH Desc #001: RUH
Type: Initially Isolated 00:07:03.898 RUH Desc #002: RUH Type: Initially Isolated 00:07:03.898 RUH Desc #003: RUH Type: Initially Isolated 00:07:03.898 RUH Desc #004: RUH Type: Initially Isolated 00:07:03.898 RUH Desc #005: RUH Type: Initially Isolated 00:07:03.898 RUH Desc #006: RUH Type: Initially Isolated 00:07:03.898 RUH Desc #007: RUH Type: Initially Isolated 00:07:03.898 00:07:03.898 FDP reclaim unit handle usage log page 00:07:03.898 ====================================== 00:07:03.898 Number of Reclaim Unit Handles: 8 00:07:03.898 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:03.898 RUH Usage Desc #001: RUH Attributes: Unused 00:07:03.898 RUH Usage Desc #002: RUH Attributes: Unused 00:07:03.898 RUH Usage Desc #003: RUH Attributes: Unused 00:07:03.898 RUH Usage Desc #004: RUH Attributes: Unused 00:07:03.898 RUH Usage Desc #005: RUH Attributes: Unused 00:07:03.898 RUH Usage Desc #006: RUH Attributes: Unused 00:07:03.898 RUH Usage Desc #007: RUH Attributes: Unused 00:07:03.898 00:07:03.898 FDP statistics log page 00:07:03.898 ======================= 00:07:03.898 Host bytes with metadata written: 457023488 00:07:03.898 Media bytes with metadata written: 457076736 00:07:03.898 [2024-12-10 02:54:58.169651] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62824 terminated unexpected 00:07:03.898 Media bytes erased: 0 00:07:03.898 00:07:03.898 FDP events log page 00:07:03.898 =================== 00:07:03.898 Number of FDP events: 0 00:07:03.898 00:07:03.898 NVM Specific Namespace Data 00:07:03.898 =========================== 00:07:03.898 Logical Block Storage Tag Mask: 0 00:07:03.898 Protection Information Capabilities: 00:07:03.898 16b Guard Protection Information Storage Tag Support: No 00:07:03.898 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:03.898 Storage Tag Check Read Support: No 00:07:03.898 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.898 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.898 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.898 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.898 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.898 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.898 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.898 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.898 ===================================================== 00:07:03.898 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:03.898 ===================================================== 00:07:03.898 Controller Capabilities/Features 00:07:03.898 ================================ 00:07:03.898 Vendor ID: 1b36 00:07:03.898 Subsystem Vendor ID: 1af4 00:07:03.898 Serial Number: 12342 00:07:03.898 Model Number: QEMU NVMe Ctrl 00:07:03.898 Firmware Version: 8.0.0 00:07:03.898 Recommended Arb Burst: 6 00:07:03.898 IEEE OUI Identifier: 00 54 52 00:07:03.898 Multi-path I/O 00:07:03.898 May have multiple subsystem ports: No 00:07:03.898 May have multiple controllers: No 00:07:03.898 Associated with SR-IOV VF: No 00:07:03.898 Max Data Transfer Size: 524288 00:07:03.898 Max Number of Namespaces: 256
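The FDP configuration, reclaim unit handle usage, and statistics sections above (controller 12343) are standard log pages defined by the NVMe Flexible Data Placement feature; once such a device is bound back to the kernel driver they can also be read raw with nvme-cli. The log IDs follow the spec (20h configurations, 21h RUH usage, 22h statistics, 23h events); the device path and the use of --lsi for the endurance group ID are assumptions of this sketch:

  # Fetch the 64-byte FDP statistics page (LID 0x22) for endurance group 1.
  nvme get-log /dev/nvme0 --log-id=0x22 --log-len=64 --lsi=1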
00:07:03.898 Max Number of I/O Queues: 64 00:07:03.898 NVMe Specification Version (VS): 1.4 00:07:03.898 NVMe Specification Version (Identify): 1.4 00:07:03.898 Maximum Queue Entries: 2048 00:07:03.898 Contiguous Queues Required: Yes 00:07:03.898 Arbitration Mechanisms Supported 00:07:03.898 Weighted Round Robin: Not Supported 00:07:03.898 Vendor Specific: Not Supported 00:07:03.898 Reset Timeout: 7500 ms 00:07:03.898 Doorbell Stride: 4 bytes 00:07:03.898 NVM Subsystem Reset: Not Supported 00:07:03.898 Command Sets Supported 00:07:03.898 NVM Command Set: Supported 00:07:03.898 Boot Partition: Not Supported 00:07:03.898 Memory Page Size Minimum: 4096 bytes 00:07:03.898 Memory Page Size Maximum: 65536 bytes 00:07:03.898 Persistent Memory Region: Not Supported 00:07:03.898 Optional Asynchronous Events Supported 00:07:03.898 Namespace Attribute Notices: Supported 00:07:03.898 Firmware Activation Notices: Not Supported 00:07:03.898 ANA Change Notices: Not Supported 00:07:03.898 PLE Aggregate Log Change Notices: Not Supported 00:07:03.898 LBA Status Info Alert Notices: Not Supported 00:07:03.898 EGE Aggregate Log Change Notices: Not Supported 00:07:03.898 Normal NVM Subsystem Shutdown event: Not Supported 00:07:03.898 Zone Descriptor Change Notices: Not Supported 00:07:03.898 Discovery Log Change Notices: Not Supported 00:07:03.898 Controller Attributes 00:07:03.898 128-bit Host Identifier: Not Supported 00:07:03.898 Non-Operational Permissive Mode: Not Supported 00:07:03.898 NVM Sets: Not Supported 00:07:03.898 Read Recovery Levels: Not Supported 00:07:03.898 Endurance Groups: Not Supported 00:07:03.898 Predictable Latency Mode: Not Supported 00:07:03.898 Traffic Based Keep Alive: Not Supported 00:07:03.898 Namespace Granularity: Not Supported 00:07:03.898 SQ Associations: Not Supported 00:07:03.898 UUID List: Not Supported 00:07:03.898 Multi-Domain Subsystem: Not Supported 00:07:03.898 Fixed Capacity Management: Not Supported 00:07:03.898 Variable Capacity Management: Not Supported 00:07:03.898 Delete Endurance Group: Not Supported 00:07:03.898 Delete NVM Set: Not Supported 00:07:03.898 Extended LBA Formats Supported: Supported 00:07:03.898 Flexible Data Placement Supported: Not Supported 00:07:03.898 00:07:03.898 Controller Memory Buffer Support 00:07:03.898 ================================ 00:07:03.898 Supported: No 00:07:03.898 00:07:03.898 Persistent Memory Region Support 00:07:03.898 ================================ 00:07:03.898 Supported: No 00:07:03.898 00:07:03.898 Admin Command Set Attributes 00:07:03.898 ============================ 00:07:03.898 Security Send/Receive: Not Supported 00:07:03.898 Format NVM: Supported 00:07:03.898 Firmware Activate/Download: Not Supported 00:07:03.898 Namespace Management: Supported 00:07:03.898 Device Self-Test: Not Supported 00:07:03.898 Directives: Supported 00:07:03.898 NVMe-MI: Not Supported 00:07:03.898 Virtualization Management: Not Supported 00:07:03.898 Doorbell Buffer Config: Supported 00:07:03.898 Get LBA Status Capability: Not Supported 00:07:03.898 Command & Feature Lockdown Capability: Not Supported 00:07:03.898 Abort Command Limit: 4 00:07:03.898 Async Event Request Limit: 4 00:07:03.898 Number of Firmware Slots: N/A 00:07:03.898 Firmware Slot 1 Read-Only: N/A 00:07:03.898 Firmware Activation Without Reset: N/A 00:07:03.898 Multiple Update Detection Support: N/A 00:07:03.898 Firmware Update Granularity: No Information Provided 00:07:03.898 Per-Namespace SMART Log: Yes 00:07:03.898 Asymmetric Namespace Access Log Page: Not Supported
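The identify dumps in this stage are produced by spdk_nvme_identify, which reads these fields through SPDK's public controller API. A minimal sketch of that flow, not the test harness itself (assuming a local SPDK build with spdk/env.h and spdk/nvme.h available, all error handling trimmed):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Attach to every local PCIe controller the probe finds. */
static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts) { return true; }

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                      struct spdk_nvme_ctrlr *ctrlr,
                      const struct spdk_nvme_ctrlr_opts *opts)
{
    /* cdata backs the "Controller Capabilities/Features" fields above;
     * sn/mn/fr are fixed-width, space-padded fields, not C strings. */
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("%s: VID 0x%04x SN %.20s MN %.40s FR %.8s\n",
           trid->traddr, (unsigned)cdata->vid, (const char *)cdata->sn,
           (const char *)cdata->mn, (const char *)cdata->fr);
    spdk_nvme_detach(ctrlr);
}

int main(void)
{
    struct spdk_env_opts opts;
    spdk_env_opts_init(&opts);
    if (spdk_env_init(&opts) < 0) { return 1; }
    /* NULL transport ID probes the local PCIe bus, like the tool above. */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0;
}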
00:07:03.898 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:03.898 Command Effects Log Page: Supported 00:07:03.898 Get Log Page Extended Data: Supported 00:07:03.898 Telemetry Log Pages: Not Supported 00:07:03.898 Persistent Event Log Pages: Not Supported 00:07:03.898 Supported Log Pages Log Page: May Support 00:07:03.898 Commands Supported & Effects Log Page: Not Supported 00:07:03.898 Feature Identifiers & Effects Log Page: May Support 00:07:03.898 NVMe-MI Commands & Effects Log Page: May Support 00:07:03.898 Data Area 4 for Telemetry Log: Not Supported 00:07:03.898 Error Log Page Entries Supported: 1 00:07:03.898 Keep Alive: Not Supported 00:07:03.898 00:07:03.898 NVM Command Set Attributes 00:07:03.898 ========================== 00:07:03.898 Submission Queue Entry Size 00:07:03.898 Max: 64 00:07:03.898 Min: 64 00:07:03.898 Completion Queue Entry Size 00:07:03.899 Max: 16 00:07:03.899 Min: 16 00:07:03.899 Number of Namespaces: 256 00:07:03.899 Compare Command: Supported 00:07:03.899 Write Uncorrectable Command: Not Supported 00:07:03.899 Dataset Management Command: Supported 00:07:03.899 Write Zeroes Command: Supported 00:07:03.899 Set Features Save Field: Supported 00:07:03.899 Reservations: Not Supported 00:07:03.899 Timestamp: Supported 00:07:03.899 Copy: Supported 00:07:03.899 Volatile Write Cache: Present 00:07:03.899 Atomic Write Unit (Normal): 1 00:07:03.899 Atomic Write Unit (PFail): 1 00:07:03.899 Atomic Compare & Write Unit: 1 00:07:03.899 Fused Compare & Write: Not Supported 00:07:03.899 Scatter-Gather List 00:07:03.899 SGL Command Set: Supported 00:07:03.899 SGL Keyed: Not Supported 00:07:03.899 SGL Bit Bucket Descriptor: Not Supported 00:07:03.899 SGL Metadata Pointer: Not Supported 00:07:03.899 Oversized SGL: Not Supported 00:07:03.899 SGL Metadata Address: Not Supported 00:07:03.899 SGL Offset: Not Supported 00:07:03.899 Transport SGL Data Block: Not Supported 00:07:03.899 Replay Protected Memory Block: Not Supported 00:07:03.899 00:07:03.899 Firmware Slot Information 00:07:03.899 ========================= 00:07:03.899 Active slot: 1 00:07:03.899 Slot 1 Firmware Revision: 1.0 00:07:03.899 00:07:03.899 00:07:03.899 Commands Supported and Effects 00:07:03.899 ============================== 00:07:03.899 Admin Commands 00:07:03.899 -------------- 00:07:03.899 Delete I/O Submission Queue (00h): Supported 00:07:03.899 Create I/O Submission Queue (01h): Supported 00:07:03.899 Get Log Page (02h): Supported 00:07:03.899 Delete I/O Completion Queue (04h): Supported 00:07:03.899 Create I/O Completion Queue (05h): Supported 00:07:03.899 Identify (06h): Supported 00:07:03.899 Abort (08h): Supported 00:07:03.899 Set Features (09h): Supported 00:07:03.899 Get Features (0Ah): Supported 00:07:03.899 Asynchronous Event Request (0Ch): Supported 00:07:03.899 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:03.899 Directive Send (19h): Supported 00:07:03.899 Directive Receive (1Ah): Supported 00:07:03.899 Virtualization Management (1Ch): Supported 00:07:03.899 Doorbell Buffer Config (7Ch): Supported 00:07:03.899 Format NVM (80h): Supported LBA-Change 00:07:03.899 I/O Commands 00:07:03.899 ------------ 00:07:03.899 Flush (00h): Supported LBA-Change 00:07:03.899 Write (01h): Supported LBA-Change 00:07:03.899 Read (02h): Supported 00:07:03.899 Compare (05h): Supported 00:07:03.899 Write Zeroes (08h): Supported LBA-Change 00:07:03.899 Dataset Management (09h): Supported LBA-Change 00:07:03.899 Unknown (0Ch): Supported 00:07:03.899 Unknown (12h): Supported 00:07:03.899 Copy (19h):
Supported LBA-Change 00:07:03.899 Unknown (1Dh): Supported LBA-Change 00:07:03.899 00:07:03.899 Error Log 00:07:03.899 ========= 00:07:03.899 00:07:03.899 Arbitration 00:07:03.899 =========== 00:07:03.899 Arbitration Burst: no limit 00:07:03.899 00:07:03.899 Power Management 00:07:03.899 ================ 00:07:03.899 Number of Power States: 1 00:07:03.899 Current Power State: Power State #0 00:07:03.899 Power State #0: 00:07:03.899 Max Power: 25.00 W 00:07:03.899 Non-Operational State: Operational 00:07:03.899 Entry Latency: 16 microseconds 00:07:03.899 Exit Latency: 4 microseconds 00:07:03.899 Relative Read Throughput: 0 00:07:03.899 Relative Read Latency: 0 00:07:03.899 Relative Write Throughput: 0 00:07:03.899 Relative Write Latency: 0 00:07:03.899 Idle Power: Not Reported 00:07:03.899 Active Power: Not Reported 00:07:03.899 Non-Operational Permissive Mode: Not Supported 00:07:03.899 00:07:03.899 Health Information 00:07:03.899 ================== 00:07:03.899 Critical Warnings: 00:07:03.899 Available Spare Space: OK 00:07:03.899 Temperature: OK 00:07:03.899 Device Reliability: OK 00:07:03.899 Read Only: No 00:07:03.899 Volatile Memory Backup: OK 00:07:03.899 Current Temperature: 323 Kelvin (50 Celsius) 00:07:03.899 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:03.899 Available Spare: 0% 00:07:03.899 Available Spare Threshold: 0% 00:07:03.899 Life Percentage Used: 0% 00:07:03.899 Data Units Read: 2185 00:07:03.899 Data Units Written: 1972 00:07:03.899 Host Read Commands: 116308 00:07:03.899 Host Write Commands: 114577 00:07:03.899 Controller Busy Time: 0 minutes 00:07:03.899 Power Cycles: 0 00:07:03.899 Power On Hours: 0 hours 00:07:03.899 Unsafe Shutdowns: 0 00:07:03.899 Unrecoverable Media Errors: 0 00:07:03.899 Lifetime Error Log Entries: 0 00:07:03.899 Warning Temperature Time: 0 minutes 00:07:03.899 Critical Temperature Time: 0 minutes 00:07:03.899 00:07:03.899 Number of Queues 00:07:03.899 ================ 00:07:03.899 Number of I/O Submission Queues: 64 00:07:03.899 Number of I/O Completion Queues: 64 00:07:03.899 00:07:03.899 ZNS Specific Controller Data 00:07:03.899 ============================ 00:07:03.899 Zone Append Size Limit: 0 00:07:03.899 00:07:03.899 00:07:03.899 Active Namespaces 00:07:03.899 ================= 00:07:03.899 Namespace ID:1 00:07:03.899 Error Recovery Timeout: Unlimited 00:07:03.899 Command Set Identifier: NVM (00h) 00:07:03.899 Deallocate: Supported 00:07:03.899 Deallocated/Unwritten Error: Supported 00:07:03.899 Deallocated Read Value: All 0x00 00:07:03.899 Deallocate in Write Zeroes: Not Supported 00:07:03.899 Deallocated Guard Field: 0xFFFF 00:07:03.899 Flush: Supported 00:07:03.899 Reservation: Not Supported 00:07:03.899 Namespace Sharing Capabilities: Private 00:07:03.899 Size (in LBAs): 1048576 (4GiB) 00:07:03.899 Capacity (in LBAs): 1048576 (4GiB) 00:07:03.899 Utilization (in LBAs): 1048576 (4GiB) 00:07:03.899 Thin Provisioning: Not Supported 00:07:03.899 Per-NS Atomic Units: No 00:07:03.899 Maximum Single Source Range Length: 128 00:07:03.899 Maximum Copy Length: 128 00:07:03.899 Maximum Source Range Count: 128 00:07:03.899 NGUID/EUI64 Never Reused: No 00:07:03.899 Namespace Write Protected: No 00:07:03.899 Number of LBA Formats: 8 00:07:03.899 Current LBA Format: LBA Format #04 00:07:03.899 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:03.899 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:03.899 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:03.899 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:03.899 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:07:03.899 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:03.899 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:03.899 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:03.899 00:07:03.899 NVM Specific Namespace Data 00:07:03.899 =========================== 00:07:03.899 Logical Block Storage Tag Mask: 0 00:07:03.899 Protection Information Capabilities: 00:07:03.899 16b Guard Protection Information Storage Tag Support: No 00:07:03.899 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:03.899 Storage Tag Check Read Support: No 00:07:03.899 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.899 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.899 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.899 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.899 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.899 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.899 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.899 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.899 Namespace ID:2 00:07:03.899 Error Recovery Timeout: Unlimited 00:07:03.899 Command Set Identifier: NVM (00h) 00:07:03.899 Deallocate: Supported 00:07:03.899 Deallocated/Unwritten Error: Supported 00:07:03.899 Deallocated Read Value: All 0x00 00:07:03.899 Deallocate in Write Zeroes: Not Supported 00:07:03.899 Deallocated Guard Field: 0xFFFF 00:07:03.899 Flush: Supported 00:07:03.899 Reservation: Not Supported 00:07:03.899 Namespace Sharing Capabilities: Private 00:07:03.899 Size (in LBAs): 1048576 (4GiB) 00:07:03.899 Capacity (in LBAs): 1048576 (4GiB) 00:07:03.899 Utilization (in LBAs): 1048576 (4GiB) 00:07:03.899 Thin Provisioning: Not Supported 00:07:03.899 Per-NS Atomic Units: No 00:07:03.899 Maximum Single Source Range Length: 128 00:07:03.899 Maximum Copy Length: 128 00:07:03.899 Maximum Source Range Count: 128 00:07:03.899 NGUID/EUI64 Never Reused: No 00:07:03.899 Namespace Write Protected: No 00:07:03.899 Number of LBA Formats: 8 00:07:03.899 Current LBA Format: LBA Format #04 00:07:03.899 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:03.899 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:03.899 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:03.899 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:03.899 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:03.899 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:03.899 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:03.899 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:03.899 00:07:03.899 NVM Specific Namespace Data 00:07:03.899 =========================== 00:07:03.899 Logical Block Storage Tag Mask: 0 00:07:03.900 Protection Information Capabilities: 00:07:03.900 16b Guard Protection Information Storage Tag Support: No 00:07:03.900 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:03.900 Storage Tag Check Read Support: No 00:07:03.900 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Namespace ID:3 00:07:03.900 Error Recovery Timeout: Unlimited 00:07:03.900 Command Set Identifier: NVM (00h) 00:07:03.900 Deallocate: Supported 00:07:03.900 Deallocated/Unwritten Error: Supported 00:07:03.900 Deallocated Read Value: All 0x00 00:07:03.900 Deallocate in Write Zeroes: Not Supported 00:07:03.900 Deallocated Guard Field: 0xFFFF 00:07:03.900 Flush: Supported 00:07:03.900 Reservation: Not Supported 00:07:03.900 Namespace Sharing Capabilities: Private 00:07:03.900 Size (in LBAs): 1048576 (4GiB) 00:07:03.900 Capacity (in LBAs): 1048576 (4GiB) 00:07:03.900 Utilization (in LBAs): 1048576 (4GiB) 00:07:03.900 Thin Provisioning: Not Supported 00:07:03.900 Per-NS Atomic Units: No 00:07:03.900 Maximum Single Source Range Length: 128 00:07:03.900 Maximum Copy Length: 128 00:07:03.900 Maximum Source Range Count: 128 00:07:03.900 NGUID/EUI64 Never Reused: No 00:07:03.900 Namespace Write Protected: No 00:07:03.900 Number of LBA Formats: 8 00:07:03.900 Current LBA Format: LBA Format #04 00:07:03.900 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:03.900 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:03.900 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:03.900 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:03.900 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:03.900 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:03.900 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:03.900 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:03.900 00:07:03.900 NVM Specific Namespace Data 00:07:03.900 =========================== 00:07:03.900 Logical Block Storage Tag Mask: 0 00:07:03.900 Protection Information Capabilities: 00:07:03.900 16b Guard Protection Information Storage Tag Support: No 00:07:03.900 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:03.900 Storage Tag Check Read Support: No 00:07:03.900 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:03.900 02:54:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:03.900 02:54:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:04.158 ===================================================== 00:07:04.158 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:04.158 ===================================================== 00:07:04.158 Controller Capabilities/Features 00:07:04.158 ================================ 00:07:04.158 Vendor ID: 1b36 00:07:04.158 Subsystem Vendor ID: 1af4 00:07:04.158 Serial Number: 12340 00:07:04.158 Model Number: QEMU NVMe Ctrl 00:07:04.158 Firmware Version: 8.0.0 00:07:04.158 Recommended Arb Burst: 6 00:07:04.158 IEEE OUI Identifier: 00 54 52 00:07:04.158 Multi-path I/O 00:07:04.158 May have multiple subsystem ports: No 00:07:04.158 May have multiple controllers: No 00:07:04.158 Associated with SR-IOV VF: No 00:07:04.158 Max Data Transfer Size: 524288 00:07:04.158 Max Number of Namespaces: 256 00:07:04.158 Max Number of I/O Queues: 64 00:07:04.158 NVMe Specification Version (VS): 1.4 00:07:04.158 NVMe Specification Version (Identify): 1.4 00:07:04.158 Maximum Queue Entries: 2048 00:07:04.158 Contiguous Queues Required: Yes 00:07:04.158 Arbitration Mechanisms Supported 00:07:04.158 Weighted Round Robin: Not Supported 00:07:04.158 Vendor Specific: Not Supported 00:07:04.158 Reset Timeout: 7500 ms 00:07:04.158 Doorbell Stride: 4 bytes 00:07:04.158 NVM Subsystem Reset: Not Supported 00:07:04.158 Command Sets Supported 00:07:04.158 NVM Command Set: Supported 00:07:04.158 Boot Partition: Not Supported 00:07:04.158 Memory Page Size Minimum: 4096 bytes 00:07:04.158 Memory Page Size Maximum: 65536 bytes 00:07:04.158 Persistent Memory Region: Not Supported 00:07:04.158 Optional Asynchronous Events Supported 00:07:04.158 Namespace Attribute Notices: Supported 00:07:04.158 Firmware Activation Notices: Not Supported 00:07:04.158 ANA Change Notices: Not Supported 00:07:04.158 PLE Aggregate Log Change Notices: Not Supported 00:07:04.158 LBA Status Info Alert Notices: Not Supported 00:07:04.158 EGE Aggregate Log Change Notices: Not Supported 00:07:04.158 Normal NVM Subsystem Shutdown event: Not Supported 00:07:04.158 Zone Descriptor Change Notices: Not Supported 00:07:04.158 Discovery Log Change Notices: Not Supported 00:07:04.158 Controller Attributes 00:07:04.158 128-bit Host Identifier: Not Supported 00:07:04.158 Non-Operational Permissive Mode: Not Supported 00:07:04.158 NVM Sets: Not Supported 00:07:04.158 Read Recovery Levels: Not Supported 00:07:04.158 Endurance Groups: Not Supported 00:07:04.158 Predictable Latency Mode: Not Supported 00:07:04.158 Traffic Based Keep Alive: Not Supported 00:07:04.158 Namespace Granularity: Not Supported 00:07:04.158 SQ Associations: Not Supported 00:07:04.158 UUID List: Not Supported 00:07:04.158 Multi-Domain Subsystem: Not Supported 00:07:04.158 Fixed Capacity Management: Not Supported 00:07:04.158 Variable Capacity Management: Not Supported 00:07:04.158 Delete Endurance Group: Not Supported 00:07:04.158 Delete NVM Set: Not Supported 00:07:04.158 Extended LBA Formats Supported: Supported 00:07:04.158 Flexible Data Placement Supported: Not Supported 00:07:04.158 00:07:04.158 Controller Memory Buffer Support 00:07:04.158 ================================ 00:07:04.158 Supported: No 00:07:04.159 00:07:04.159 Persistent Memory Region Support 00:07:04.159 ================================ 00:07:04.159 Supported: No 00:07:04.159 00:07:04.159 Admin Command Set Attributes 00:07:04.159 ============================ 00:07:04.159 Security Send/Receive: Not Supported 00:07:04.159
Format NVM: Supported 00:07:04.159 Firmware Activate/Download: Not Supported 00:07:04.159 Namespace Management: Supported 00:07:04.159 Device Self-Test: Not Supported 00:07:04.159 Directives: Supported 00:07:04.159 NVMe-MI: Not Supported 00:07:04.159 Virtualization Management: Not Supported 00:07:04.159 Doorbell Buffer Config: Supported 00:07:04.159 Get LBA Status Capability: Not Supported 00:07:04.159 Command & Feature Lockdown Capability: Not Supported 00:07:04.159 Abort Command Limit: 4 00:07:04.159 Async Event Request Limit: 4 00:07:04.159 Number of Firmware Slots: N/A 00:07:04.159 Firmware Slot 1 Read-Only: N/A 00:07:04.159 Firmware Activation Without Reset: N/A 00:07:04.159 Multiple Update Detection Support: N/A 00:07:04.159 Firmware Update Granularity: No Information Provided 00:07:04.159 Per-Namespace SMART Log: Yes 00:07:04.159 Asymmetric Namespace Access Log Page: Not Supported 00:07:04.159 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:04.159 Command Effects Log Page: Supported 00:07:04.159 Get Log Page Extended Data: Supported 00:07:04.159 Telemetry Log Pages: Not Supported 00:07:04.159 Persistent Event Log Pages: Not Supported 00:07:04.159 Supported Log Pages Log Page: May Support 00:07:04.159 Commands Supported & Effects Log Page: Not Supported 00:07:04.159 Feature Identifiers & Effects Log Page: May Support 00:07:04.159 NVMe-MI Commands & Effects Log Page: May Support 00:07:04.159 Data Area 4 for Telemetry Log: Not Supported 00:07:04.159 Error Log Page Entries Supported: 1 00:07:04.159 Keep Alive: Not Supported 00:07:04.159 00:07:04.159 NVM Command Set Attributes 00:07:04.159 ========================== 00:07:04.159 Submission Queue Entry Size 00:07:04.159 Max: 64 00:07:04.159 Min: 64 00:07:04.159 Completion Queue Entry Size 00:07:04.159 Max: 16 00:07:04.159 Min: 16 00:07:04.159 Number of Namespaces: 256 00:07:04.159 Compare Command: Supported 00:07:04.159 Write Uncorrectable Command: Not Supported 00:07:04.159 Dataset Management Command: Supported 00:07:04.159 Write Zeroes Command: Supported 00:07:04.159 Set Features Save Field: Supported 00:07:04.159 Reservations: Not Supported 00:07:04.159 Timestamp: Supported 00:07:04.159 Copy: Supported 00:07:04.159 Volatile Write Cache: Present 00:07:04.159 Atomic Write Unit (Normal): 1 00:07:04.159 Atomic Write Unit (PFail): 1 00:07:04.159 Atomic Compare & Write Unit: 1 00:07:04.159 Fused Compare & Write: Not Supported 00:07:04.159 Scatter-Gather List 00:07:04.159 SGL Command Set: Supported 00:07:04.159 SGL Keyed: Not Supported 00:07:04.159 SGL Bit Bucket Descriptor: Not Supported 00:07:04.159 SGL Metadata Pointer: Not Supported 00:07:04.159 Oversized SGL: Not Supported 00:07:04.159 SGL Metadata Address: Not Supported 00:07:04.159 SGL Offset: Not Supported 00:07:04.159 Transport SGL Data Block: Not Supported 00:07:04.159 Replay Protected Memory Block: Not Supported 00:07:04.159 00:07:04.159 Firmware Slot Information 00:07:04.159 ========================= 00:07:04.159 Active slot: 1 00:07:04.159 Slot 1 Firmware Revision: 1.0 00:07:04.159 00:07:04.159 00:07:04.159 Commands Supported and Effects 00:07:04.159 ============================== 00:07:04.159 Admin Commands 00:07:04.159 -------------- 00:07:04.159 Delete I/O Submission Queue (00h): Supported 00:07:04.159 Create I/O Submission Queue (01h): Supported 00:07:04.159 Get Log Page (02h): Supported 00:07:04.159 Delete I/O Completion Queue (04h): Supported 00:07:04.159 Create I/O Completion Queue (05h): Supported 00:07:04.159 Identify (06h): Supported 00:07:04.159 Abort (08h): Supported
00:07:04.159 Set Features (09h): Supported 00:07:04.159 Get Features (0Ah): Supported 00:07:04.159 Asynchronous Event Request (0Ch): Supported 00:07:04.159 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:04.159 Directive Send (19h): Supported 00:07:04.159 Directive Receive (1Ah): Supported 00:07:04.159 Virtualization Management (1Ch): Supported 00:07:04.159 Doorbell Buffer Config (7Ch): Supported 00:07:04.159 Format NVM (80h): Supported LBA-Change 00:07:04.159 I/O Commands 00:07:04.159 ------------ 00:07:04.159 Flush (00h): Supported LBA-Change 00:07:04.159 Write (01h): Supported LBA-Change 00:07:04.159 Read (02h): Supported 00:07:04.159 Compare (05h): Supported 00:07:04.159 Write Zeroes (08h): Supported LBA-Change 00:07:04.159 Dataset Management (09h): Supported LBA-Change 00:07:04.159 Unknown (0Ch): Supported 00:07:04.159 Unknown (12h): Supported 00:07:04.159 Copy (19h): Supported LBA-Change 00:07:04.159 Unknown (1Dh): Supported LBA-Change 00:07:04.159 00:07:04.159 Error Log 00:07:04.159 ========= 00:07:04.159 00:07:04.159 Arbitration 00:07:04.159 =========== 00:07:04.159 Arbitration Burst: no limit 00:07:04.159 00:07:04.159 Power Management 00:07:04.159 ================ 00:07:04.159 Number of Power States: 1 00:07:04.159 Current Power State: Power State #0 00:07:04.159 Power State #0: 00:07:04.159 Max Power: 25.00 W 00:07:04.159 Non-Operational State: Operational 00:07:04.159 Entry Latency: 16 microseconds 00:07:04.159 Exit Latency: 4 microseconds 00:07:04.159 Relative Read Throughput: 0 00:07:04.159 Relative Read Latency: 0 00:07:04.159 Relative Write Throughput: 0 00:07:04.159 Relative Write Latency: 0 00:07:04.159 Idle Power: Not Reported 00:07:04.159 Active Power: Not Reported 00:07:04.159 Non-Operational Permissive Mode: Not Supported 00:07:04.159 00:07:04.159 Health Information 00:07:04.159 ================== 00:07:04.159 Critical Warnings: 00:07:04.159 Available Spare Space: OK 00:07:04.159 Temperature: OK 00:07:04.159 Device Reliability: OK 00:07:04.159 Read Only: No 00:07:04.159 Volatile Memory Backup: OK 00:07:04.159 Current Temperature: 323 Kelvin (50 Celsius) 00:07:04.159 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:04.159 Available Spare: 0% 00:07:04.159 Available Spare Threshold: 0% 00:07:04.159 Life Percentage Used: 0% 00:07:04.159 Data Units Read: 673 00:07:04.159 Data Units Written: 601 00:07:04.159 Host Read Commands: 38036 00:07:04.159 Host Write Commands: 37822 00:07:04.159 Controller Busy Time: 0 minutes 00:07:04.159 Power Cycles: 0 00:07:04.159 Power On Hours: 0 hours 00:07:04.159 Unsafe Shutdowns: 0 00:07:04.159 Unrecoverable Media Errors: 0 00:07:04.159 Lifetime Error Log Entries: 0 00:07:04.159 Warning Temperature Time: 0 minutes 00:07:04.159 Critical Temperature Time: 0 minutes 00:07:04.159 00:07:04.159 Number of Queues 00:07:04.159 ================ 00:07:04.159 Number of I/O Submission Queues: 64 00:07:04.159 Number of I/O Completion Queues: 64 00:07:04.159 00:07:04.159 ZNS Specific Controller Data 00:07:04.159 ============================ 00:07:04.159 Zone Append Size Limit: 0 00:07:04.159 00:07:04.159 00:07:04.159 Active Namespaces 00:07:04.159 ================= 00:07:04.159 Namespace ID:1 00:07:04.159 Error Recovery Timeout: Unlimited 00:07:04.159 Command Set Identifier: NVM (00h) 00:07:04.159 Deallocate: Supported 00:07:04.159 Deallocated/Unwritten Error: Supported 00:07:04.159 Deallocated Read Value: All 0x00 00:07:04.159 Deallocate in Write Zeroes: Not Supported 00:07:04.159 Deallocated Guard Field: 0xFFFF 00:07:04.159 Flush: 
Supported 00:07:04.159 Reservation: Not Supported 00:07:04.159 Metadata Transferred as: Separate Metadata Buffer 00:07:04.159 Namespace Sharing Capabilities: Private 00:07:04.159 Size (in LBAs): 1548666 (5GiB) 00:07:04.159 Capacity (in LBAs): 1548666 (5GiB) 00:07:04.159 Utilization (in LBAs): 1548666 (5GiB) 00:07:04.159 Thin Provisioning: Not Supported 00:07:04.159 Per-NS Atomic Units: No 00:07:04.159 Maximum Single Source Range Length: 128 00:07:04.159 Maximum Copy Length: 128 00:07:04.159 Maximum Source Range Count: 128 00:07:04.159 NGUID/EUI64 Never Reused: No 00:07:04.159 Namespace Write Protected: No 00:07:04.159 Number of LBA Formats: 8 00:07:04.159 Current LBA Format: LBA Format #07 00:07:04.159 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.159 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.159 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.159 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.159 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:04.159 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.159 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:04.159 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.159 00:07:04.159 NVM Specific Namespace Data 00:07:04.159 =========================== 00:07:04.159 Logical Block Storage Tag Mask: 0 00:07:04.159 Protection Information Capabilities: 00:07:04.159 16b Guard Protection Information Storage Tag Support: No 00:07:04.159 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:04.159 Storage Tag Check Read Support: No 00:07:04.159 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.159 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.160 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.160 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.160 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.160 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.160 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.160 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.160 02:54:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:04.160 02:54:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:04.491 ===================================================== 00:07:04.491 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:04.491 ===================================================== 00:07:04.491 Controller Capabilities/Features 00:07:04.491 ================================ 00:07:04.491 Vendor ID: 1b36 00:07:04.491 Subsystem Vendor ID: 1af4 00:07:04.491 Serial Number: 12341 00:07:04.491 Model Number: QEMU NVMe Ctrl 00:07:04.491 Firmware Version: 8.0.0 00:07:04.491 Recommended Arb Burst: 6 00:07:04.491 IEEE OUI Identifier: 00 54 52 00:07:04.491 Multi-path I/O 00:07:04.491 May have multiple subsystem ports: No 00:07:04.491 May have multiple controllers: No 00:07:04.491 Associated with SR-IOV VF: No 00:07:04.491 Max Data Transfer Size: 524288 00:07:04.491 Max Number of Namespaces: 256 00:07:04.491 Max Number of I/O Queues: 64 00:07:04.491 NVMe 
Specification Version (VS): 1.4 00:07:04.491 NVMe Specification Version (Identify): 1.4 00:07:04.491 Maximum Queue Entries: 2048 00:07:04.491 Contiguous Queues Required: Yes 00:07:04.491 Arbitration Mechanisms Supported 00:07:04.491 Weighted Round Robin: Not Supported 00:07:04.491 Vendor Specific: Not Supported 00:07:04.491 Reset Timeout: 7500 ms 00:07:04.491 Doorbell Stride: 4 bytes 00:07:04.491 NVM Subsystem Reset: Not Supported 00:07:04.491 Command Sets Supported 00:07:04.491 NVM Command Set: Supported 00:07:04.491 Boot Partition: Not Supported 00:07:04.491 Memory Page Size Minimum: 4096 bytes 00:07:04.491 Memory Page Size Maximum: 65536 bytes 00:07:04.491 Persistent Memory Region: Not Supported 00:07:04.491 Optional Asynchronous Events Supported 00:07:04.491 Namespace Attribute Notices: Supported 00:07:04.491 Firmware Activation Notices: Not Supported 00:07:04.491 ANA Change Notices: Not Supported 00:07:04.491 PLE Aggregate Log Change Notices: Not Supported 00:07:04.491 LBA Status Info Alert Notices: Not Supported 00:07:04.491 EGE Aggregate Log Change Notices: Not Supported 00:07:04.491 Normal NVM Subsystem Shutdown event: Not Supported 00:07:04.491 Zone Descriptor Change Notices: Not Supported 00:07:04.491 Discovery Log Change Notices: Not Supported 00:07:04.491 Controller Attributes 00:07:04.491 128-bit Host Identifier: Not Supported 00:07:04.491 Non-Operational Permissive Mode: Not Supported 00:07:04.491 NVM Sets: Not Supported 00:07:04.491 Read Recovery Levels: Not Supported 00:07:04.491 Endurance Groups: Not Supported 00:07:04.491 Predictable Latency Mode: Not Supported 00:07:04.491 Traffic Based Keep Alive: Not Supported 00:07:04.491 Namespace Granularity: Not Supported 00:07:04.491 SQ Associations: Not Supported 00:07:04.491 UUID List: Not Supported 00:07:04.491 Multi-Domain Subsystem: Not Supported 00:07:04.491 Fixed Capacity Management: Not Supported 00:07:04.491 Variable Capacity Management: Not Supported 00:07:04.491 Delete Endurance Group: Not Supported 00:07:04.491 Delete NVM Set: Not Supported 00:07:04.491 Extended LBA Formats Supported: Supported 00:07:04.491 Flexible Data Placement Supported: Not Supported 00:07:04.491 00:07:04.491 Controller Memory Buffer Support 00:07:04.491 ================================ 00:07:04.491 Supported: No 00:07:04.491 00:07:04.491 Persistent Memory Region Support 00:07:04.491 ================================ 00:07:04.491 Supported: No 00:07:04.491 00:07:04.491 Admin Command Set Attributes 00:07:04.491 ============================ 00:07:04.491 Security Send/Receive: Not Supported 00:07:04.491 Format NVM: Supported 00:07:04.491 Firmware Activate/Download: Not Supported 00:07:04.491 Namespace Management: Supported 00:07:04.491 Device Self-Test: Not Supported 00:07:04.491 Directives: Supported 00:07:04.491 NVMe-MI: Not Supported 00:07:04.491 Virtualization Management: Not Supported 00:07:04.491 Doorbell Buffer Config: Supported 00:07:04.491 Get LBA Status Capability: Not Supported 00:07:04.491 Command & Feature Lockdown Capability: Not Supported 00:07:04.491 Abort Command Limit: 4 00:07:04.491 Async Event Request Limit: 4 00:07:04.491 Number of Firmware Slots: N/A 00:07:04.491 Firmware Slot 1 Read-Only: N/A 00:07:04.491 Firmware Activation Without Reset: N/A 00:07:04.491 Multiple Update Detection Support: N/A 00:07:04.491 Firmware Update Granularity: No Information Provided 00:07:04.491 Per-Namespace SMART Log: Yes 00:07:04.491 Asymmetric Namespace Access Log Page: Not Supported 00:07:04.491 Subsystem NQN: nqn.2019-08.org.qemu:12341
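The per-namespace LBA format tables repeated in these dumps come straight from the Identify Namespace data: each entry's data size is 2^LBADS bytes and ms is the per-block metadata size. A sketch of walking that table with SPDK's namespace accessors (reusing the attach context from the earlier sketch; field names follow SPDK's struct spdk_nvme_ns_data, so treat the exact layout as an assumption):

static void print_lba_formats(struct spdk_nvme_ctrlr *ctrlr, uint32_t nsid)
{
    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
    const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

    /* nlbaf is zero-based: the value 7 yields the 8 formats listed here. */
    for (uint32_t i = 0; i <= nsdata->nlbaf; i++) {
        printf("LBA Format #%02u: Data Size: %u Metadata Size: %u\n",
               i, 1u << nsdata->lbaf[i].lbads, (unsigned)nsdata->lbaf[i].ms);
    }
}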
00:07:04.491 Command Effects Log Page: Supported 00:07:04.491 Get Log Page Extended Data: Supported 00:07:04.491 Telemetry Log Pages: Not Supported 00:07:04.491 Persistent Event Log Pages: Not Supported 00:07:04.491 Supported Log Pages Log Page: May Support 00:07:04.491 Commands Supported & Effects Log Page: Not Supported 00:07:04.491 Feature Identifiers & Effects Log Page: May Support 00:07:04.491 NVMe-MI Commands & Effects Log Page: May Support 00:07:04.491 Data Area 4 for Telemetry Log: Not Supported 00:07:04.491 Error Log Page Entries Supported: 1 00:07:04.491 Keep Alive: Not Supported 00:07:04.491 00:07:04.491 NVM Command Set Attributes 00:07:04.491 ========================== 00:07:04.491 Submission Queue Entry Size 00:07:04.491 Max: 64 00:07:04.491 Min: 64 00:07:04.491 Completion Queue Entry Size 00:07:04.491 Max: 16 00:07:04.491 Min: 16 00:07:04.491 Number of Namespaces: 256 00:07:04.491 Compare Command: Supported 00:07:04.491 Write Uncorrectable Command: Not Supported 00:07:04.491 Dataset Management Command: Supported 00:07:04.491 Write Zeroes Command: Supported 00:07:04.491 Set Features Save Field: Supported 00:07:04.491 Reservations: Not Supported 00:07:04.491 Timestamp: Supported 00:07:04.491 Copy: Supported 00:07:04.491 Volatile Write Cache: Present 00:07:04.491 Atomic Write Unit (Normal): 1 00:07:04.491 Atomic Write Unit (PFail): 1 00:07:04.491 Atomic Compare & Write Unit: 1 00:07:04.491 Fused Compare & Write: Not Supported 00:07:04.491 Scatter-Gather List 00:07:04.491 SGL Command Set: Supported 00:07:04.491 SGL Keyed: Not Supported 00:07:04.491 SGL Bit Bucket Descriptor: Not Supported 00:07:04.491 SGL Metadata Pointer: Not Supported 00:07:04.491 Oversized SGL: Not Supported 00:07:04.491 SGL Metadata Address: Not Supported 00:07:04.491 SGL Offset: Not Supported 00:07:04.491 Transport SGL Data Block: Not Supported 00:07:04.491 Replay Protected Memory Block: Not Supported 00:07:04.491 00:07:04.491 Firmware Slot Information 00:07:04.491 ========================= 00:07:04.491 Active slot: 1 00:07:04.491 Slot 1 Firmware Revision: 1.0 00:07:04.491 00:07:04.491 00:07:04.491 Commands Supported and Effects 00:07:04.491 ============================== 00:07:04.491 Admin Commands 00:07:04.491 -------------- 00:07:04.491 Delete I/O Submission Queue (00h): Supported 00:07:04.491 Create I/O Submission Queue (01h): Supported 00:07:04.491 Get Log Page (02h): Supported 00:07:04.491 Delete I/O Completion Queue (04h): Supported 00:07:04.491 Create I/O Completion Queue (05h): Supported 00:07:04.491 Identify (06h): Supported 00:07:04.491 Abort (08h): Supported 00:07:04.491 Set Features (09h): Supported 00:07:04.491 Get Features (0Ah): Supported 00:07:04.491 Asynchronous Event Request (0Ch): Supported 00:07:04.491 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:04.491 Directive Send (19h): Supported 00:07:04.491 Directive Receive (1Ah): Supported 00:07:04.491 Virtualization Management (1Ch): Supported 00:07:04.491 Doorbell Buffer Config (7Ch): Supported 00:07:04.491 Format NVM (80h): Supported LBA-Change 00:07:04.491 I/O Commands 00:07:04.491 ------------ 00:07:04.491 Flush (00h): Supported LBA-Change 00:07:04.491 Write (01h): Supported LBA-Change 00:07:04.491 Read (02h): Supported 00:07:04.491 Compare (05h): Supported 00:07:04.491 Write Zeroes (08h): Supported LBA-Change 00:07:04.491 Dataset Management (09h): Supported LBA-Change 00:07:04.491 Unknown (0Ch): Supported 00:07:04.491 Unknown (12h): Supported 00:07:04.491 Copy (19h): Supported LBA-Change 00:07:04.491 Unknown (1Dh):
Supported LBA-Change 00:07:04.491 00:07:04.491 Error Log 00:07:04.491 ========= 00:07:04.491 00:07:04.491 Arbitration 00:07:04.491 =========== 00:07:04.491 Arbitration Burst: no limit 00:07:04.491 00:07:04.491 Power Management 00:07:04.491 ================ 00:07:04.491 Number of Power States: 1 00:07:04.491 Current Power State: Power State #0 00:07:04.491 Power State #0: 00:07:04.491 Max Power: 25.00 W 00:07:04.491 Non-Operational State: Operational 00:07:04.491 Entry Latency: 16 microseconds 00:07:04.491 Exit Latency: 4 microseconds 00:07:04.491 Relative Read Throughput: 0 00:07:04.491 Relative Read Latency: 0 00:07:04.491 Relative Write Throughput: 0 00:07:04.491 Relative Write Latency: 0 00:07:04.492 Idle Power: Not Reported 00:07:04.492 Active Power: Not Reported 00:07:04.492 Non-Operational Permissive Mode: Not Supported 00:07:04.492 00:07:04.492 Health Information 00:07:04.492 ================== 00:07:04.492 Critical Warnings: 00:07:04.492 Available Spare Space: OK 00:07:04.492 Temperature: OK 00:07:04.492 Device Reliability: OK 00:07:04.492 Read Only: No 00:07:04.492 Volatile Memory Backup: OK 00:07:04.492 Current Temperature: 323 Kelvin (50 Celsius) 00:07:04.492 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:04.492 Available Spare: 0% 00:07:04.492 Available Spare Threshold: 0% 00:07:04.492 Life Percentage Used: 0% 00:07:04.492 Data Units Read: 1009 00:07:04.492 Data Units Written: 869 00:07:04.492 Host Read Commands: 55390 00:07:04.492 Host Write Commands: 54081 00:07:04.492 Controller Busy Time: 0 minutes 00:07:04.492 Power Cycles: 0 00:07:04.492 Power On Hours: 0 hours 00:07:04.492 Unsafe Shutdowns: 0 00:07:04.492 Unrecoverable Media Errors: 0 00:07:04.492 Lifetime Error Log Entries: 0 00:07:04.492 Warning Temperature Time: 0 minutes 00:07:04.492 Critical Temperature Time: 0 minutes 00:07:04.492 00:07:04.492 Number of Queues 00:07:04.492 ================ 00:07:04.492 Number of I/O Submission Queues: 64 00:07:04.492 Number of I/O Completion Queues: 64 00:07:04.492 00:07:04.492 ZNS Specific Controller Data 00:07:04.492 ============================ 00:07:04.492 Zone Append Size Limit: 0 00:07:04.492 00:07:04.492 00:07:04.492 Active Namespaces 00:07:04.492 ================= 00:07:04.492 Namespace ID:1 00:07:04.492 Error Recovery Timeout: Unlimited 00:07:04.492 Command Set Identifier: NVM (00h) 00:07:04.492 Deallocate: Supported 00:07:04.492 Deallocated/Unwritten Error: Supported 00:07:04.492 Deallocated Read Value: All 0x00 00:07:04.492 Deallocate in Write Zeroes: Not Supported 00:07:04.492 Deallocated Guard Field: 0xFFFF 00:07:04.492 Flush: Supported 00:07:04.492 Reservation: Not Supported 00:07:04.492 Namespace Sharing Capabilities: Private 00:07:04.492 Size (in LBAs): 1310720 (5GiB) 00:07:04.492 Capacity (in LBAs): 1310720 (5GiB) 00:07:04.492 Utilization (in LBAs): 1310720 (5GiB) 00:07:04.492 Thin Provisioning: Not Supported 00:07:04.492 Per-NS Atomic Units: No 00:07:04.492 Maximum Single Source Range Length: 128 00:07:04.492 Maximum Copy Length: 128 00:07:04.492 Maximum Source Range Count: 128 00:07:04.492 NGUID/EUI64 Never Reused: No 00:07:04.492 Namespace Write Protected: No 00:07:04.492 Number of LBA Formats: 8 00:07:04.492 Current LBA Format: LBA Format #04 00:07:04.492 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.492 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.492 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.492 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.492 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:04.492 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.492 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:04.492 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.492 00:07:04.492 NVM Specific Namespace Data 00:07:04.492 =========================== 00:07:04.492 Logical Block Storage Tag Mask: 0 00:07:04.492 Protection Information Capabilities: 00:07:04.492 16b Guard Protection Information Storage Tag Support: No 00:07:04.492 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:04.492 Storage Tag Check Read Support: No 00:07:04.492 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.492 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.492 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.492 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.492 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.492 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.492 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.492 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.492 02:54:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:04.492 02:54:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:04.752 ===================================================== 00:07:04.752 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:04.752 ===================================================== 00:07:04.752 Controller Capabilities/Features 00:07:04.752 ================================ 00:07:04.752 Vendor ID: 1b36 00:07:04.752 Subsystem Vendor ID: 1af4 00:07:04.752 Serial Number: 12342 00:07:04.752 Model Number: QEMU NVMe Ctrl 00:07:04.752 Firmware Version: 8.0.0 00:07:04.752 Recommended Arb Burst: 6 00:07:04.752 IEEE OUI Identifier: 00 54 52 00:07:04.752 Multi-path I/O 00:07:04.752 May have multiple subsystem ports: No 00:07:04.752 May have multiple controllers: No 00:07:04.752 Associated with SR-IOV VF: No 00:07:04.752 Max Data Transfer Size: 524288 00:07:04.752 Max Number of Namespaces: 256 00:07:04.752 Max Number of I/O Queues: 64 00:07:04.752 NVMe Specification Version (VS): 1.4 00:07:04.752 NVMe Specification Version (Identify): 1.4 00:07:04.752 Maximum Queue Entries: 2048 00:07:04.752 Contiguous Queues Required: Yes 00:07:04.752 Arbitration Mechanisms Supported 00:07:04.752 Weighted Round Robin: Not Supported 00:07:04.752 Vendor Specific: Not Supported 00:07:04.752 Reset Timeout: 7500 ms 00:07:04.752 Doorbell Stride: 4 bytes 00:07:04.752 NVM Subsystem Reset: Not Supported 00:07:04.752 Command Sets Supported 00:07:04.752 NVM Command Set: Supported 00:07:04.752 Boot Partition: Not Supported 00:07:04.752 Memory Page Size Minimum: 4096 bytes 00:07:04.752 Memory Page Size Maximum: 65536 bytes 00:07:04.752 Persistent Memory Region: Not Supported 00:07:04.752 Optional Asynchronous Events Supported 00:07:04.752 Namespace Attribute Notices: Supported 00:07:04.752 Firmware Activation Notices: Not Supported 00:07:04.752 ANA Change Notices: Not Supported 00:07:04.752 PLE Aggregate Log Change Notices: Not Supported 00:07:04.752 LBA Status Info Alert Notices: 
Not Supported 00:07:04.752 EGE Aggregate Log Change Notices: Not Supported 00:07:04.752 Normal NVM Subsystem Shutdown event: Not Supported 00:07:04.752 Zone Descriptor Change Notices: Not Supported 00:07:04.752 Discovery Log Change Notices: Not Supported 00:07:04.752 Controller Attributes 00:07:04.752 128-bit Host Identifier: Not Supported 00:07:04.752 Non-Operational Permissive Mode: Not Supported 00:07:04.752 NVM Sets: Not Supported 00:07:04.752 Read Recovery Levels: Not Supported 00:07:04.752 Endurance Groups: Not Supported 00:07:04.752 Predictable Latency Mode: Not Supported 00:07:04.752 Traffic Based Keep Alive: Not Supported 00:07:04.752 Namespace Granularity: Not Supported 00:07:04.752 SQ Associations: Not Supported 00:07:04.752 UUID List: Not Supported 00:07:04.752 Multi-Domain Subsystem: Not Supported 00:07:04.752 Fixed Capacity Management: Not Supported 00:07:04.752 Variable Capacity Management: Not Supported 00:07:04.752 Delete Endurance Group: Not Supported 00:07:04.752 Delete NVM Set: Not Supported 00:07:04.752 Extended LBA Formats Supported: Supported 00:07:04.752 Flexible Data Placement Supported: Not Supported 00:07:04.752 00:07:04.752 Controller Memory Buffer Support 00:07:04.752 ================================ 00:07:04.752 Supported: No 00:07:04.752 00:07:04.752 Persistent Memory Region Support 00:07:04.752 ================================ 00:07:04.752 Supported: No 00:07:04.752 00:07:04.752 Admin Command Set Attributes 00:07:04.752 ============================ 00:07:04.752 Security Send/Receive: Not Supported 00:07:04.752 Format NVM: Supported 00:07:04.752 Firmware Activate/Download: Not Supported 00:07:04.752 Namespace Management: Supported 00:07:04.752 Device Self-Test: Not Supported 00:07:04.752 Directives: Supported 00:07:04.752 NVMe-MI: Not Supported 00:07:04.752 Virtualization Management: Not Supported 00:07:04.752 Doorbell Buffer Config: Supported 00:07:04.752 Get LBA Status Capability: Not Supported 00:07:04.752 Command & Feature Lockdown Capability: Not Supported 00:07:04.752 Abort Command Limit: 4 00:07:04.752 Async Event Request Limit: 4 00:07:04.752 Number of Firmware Slots: N/A 00:07:04.752 Firmware Slot 1 Read-Only: N/A 00:07:04.752 Firmware Activation Without Reset: N/A 00:07:04.752 Multiple Update Detection Support: N/A 00:07:04.752 Firmware Update Granularity: No Information Provided 00:07:04.752 Per-Namespace SMART Log: Yes 00:07:04.752 Asymmetric Namespace Access Log Page: Not Supported 00:07:04.752 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:04.752 Command Effects Log Page: Supported 00:07:04.752 Get Log Page Extended Data: Supported 00:07:04.752 Telemetry Log Pages: Not Supported 00:07:04.752 Persistent Event Log Pages: Not Supported 00:07:04.752 Supported Log Pages Log Page: May Support 00:07:04.752 Commands Supported & Effects Log Page: Not Supported 00:07:04.752 Feature Identifiers & Effects Log Page: May Support 00:07:04.752 NVMe-MI Commands & Effects Log Page: May Support 00:07:04.752 Data Area 4 for Telemetry Log: Not Supported 00:07:04.752 Error Log Page Entries Supported: 1 00:07:04.752 Keep Alive: Not Supported 00:07:04.752 00:07:04.752 NVM Command Set Attributes 00:07:04.752 ========================== 00:07:04.752 Submission Queue Entry Size 00:07:04.752 Max: 64 00:07:04.752 Min: 64 00:07:04.752 Completion Queue Entry Size 00:07:04.752 Max: 16 00:07:04.752 Min: 16 00:07:04.752 Number of Namespaces: 256 00:07:04.752 Compare Command: Supported 00:07:04.752 Write Uncorrectable Command: Not Supported 00:07:04.752 Dataset Management Command:
Supported 00:07:04.752 Write Zeroes Command: Supported 00:07:04.752 Set Features Save Field: Supported 00:07:04.752 Reservations: Not Supported 00:07:04.752 Timestamp: Supported 00:07:04.752 Copy: Supported 00:07:04.752 Volatile Write Cache: Present 00:07:04.752 Atomic Write Unit (Normal): 1 00:07:04.752 Atomic Write Unit (PFail): 1 00:07:04.752 Atomic Compare & Write Unit: 1 00:07:04.752 Fused Compare & Write: Not Supported 00:07:04.752 Scatter-Gather List 00:07:04.752 SGL Command Set: Supported 00:07:04.752 SGL Keyed: Not Supported 00:07:04.752 SGL Bit Bucket Descriptor: Not Supported 00:07:04.752 SGL Metadata Pointer: Not Supported 00:07:04.752 Oversized SGL: Not Supported 00:07:04.752 SGL Metadata Address: Not Supported 00:07:04.752 SGL Offset: Not Supported 00:07:04.752 Transport SGL Data Block: Not Supported 00:07:04.752 Replay Protected Memory Block: Not Supported 00:07:04.752 00:07:04.752 Firmware Slot Information 00:07:04.752 ========================= 00:07:04.752 Active slot: 1 00:07:04.752 Slot 1 Firmware Revision: 1.0 00:07:04.752 00:07:04.752 00:07:04.752 Commands Supported and Effects 00:07:04.752 ============================== 00:07:04.752 Admin Commands 00:07:04.752 -------------- 00:07:04.752 Delete I/O Submission Queue (00h): Supported 00:07:04.752 Create I/O Submission Queue (01h): Supported 00:07:04.752 Get Log Page (02h): Supported 00:07:04.752 Delete I/O Completion Queue (04h): Supported 00:07:04.752 Create I/O Completion Queue (05h): Supported 00:07:04.752 Identify (06h): Supported 00:07:04.752 Abort (08h): Supported 00:07:04.752 Set Features (09h): Supported 00:07:04.752 Get Features (0Ah): Supported 00:07:04.752 Asynchronous Event Request (0Ch): Supported 00:07:04.752 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:04.752 Directive Send (19h): Supported 00:07:04.752 Directive Receive (1Ah): Supported 00:07:04.752 Virtualization Management (1Ch): Supported 00:07:04.752 Doorbell Buffer Config (7Ch): Supported 00:07:04.753 Format NVM (80h): Supported LBA-Change 00:07:04.753 I/O Commands 00:07:04.753 ------------ 00:07:04.753 Flush (00h): Supported LBA-Change 00:07:04.753 Write (01h): Supported LBA-Change 00:07:04.753 Read (02h): Supported 00:07:04.753 Compare (05h): Supported 00:07:04.753 Write Zeroes (08h): Supported LBA-Change 00:07:04.753 Dataset Management (09h): Supported LBA-Change 00:07:04.753 Unknown (0Ch): Supported 00:07:04.753 Unknown (12h): Supported 00:07:04.753 Copy (19h): Supported LBA-Change 00:07:04.753 Unknown (1Dh): Supported LBA-Change 00:07:04.753 00:07:04.753 Error Log 00:07:04.753 ========= 00:07:04.753 00:07:04.753 Arbitration 00:07:04.753 =========== 00:07:04.753 Arbitration Burst: no limit 00:07:04.753 00:07:04.753 Power Management 00:07:04.753 ================ 00:07:04.753 Number of Power States: 1 00:07:04.753 Current Power State: Power State #0 00:07:04.753 Power State #0: 00:07:04.753 Max Power: 25.00 W 00:07:04.753 Non-Operational State: Operational 00:07:04.753 Entry Latency: 16 microseconds 00:07:04.753 Exit Latency: 4 microseconds 00:07:04.753 Relative Read Throughput: 0 00:07:04.753 Relative Read Latency: 0 00:07:04.753 Relative Write Throughput: 0 00:07:04.753 Relative Write Latency: 0 00:07:04.753 Idle Power: Not Reported 00:07:04.753 Active Power: Not Reported 00:07:04.753 Non-Operational Permissive Mode: Not Supported 00:07:04.753 00:07:04.753 Health Information 00:07:04.753 ================== 00:07:04.753 Critical Warnings: 00:07:04.753 Available Spare Space: OK 00:07:04.753 Temperature: OK 00:07:04.753 Device 
Reliability: OK 00:07:04.753 Read Only: No 00:07:04.753 Volatile Memory Backup: OK 00:07:04.753 Current Temperature: 323 Kelvin (50 Celsius) 00:07:04.753 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:04.753 Available Spare: 0% 00:07:04.753 Available Spare Threshold: 0% 00:07:04.753 Life Percentage Used: 0% 00:07:04.753 Data Units Read: 2185 00:07:04.753 Data Units Written: 1972 00:07:04.753 Host Read Commands: 116308 00:07:04.753 Host Write Commands: 114577 00:07:04.753 Controller Busy Time: 0 minutes 00:07:04.753 Power Cycles: 0 00:07:04.753 Power On Hours: 0 hours 00:07:04.753 Unsafe Shutdowns: 0 00:07:04.753 Unrecoverable Media Errors: 0 00:07:04.753 Lifetime Error Log Entries: 0 00:07:04.753 Warning Temperature Time: 0 minutes 00:07:04.753 Critical Temperature Time: 0 minutes 00:07:04.753 00:07:04.753 Number of Queues 00:07:04.753 ================ 00:07:04.753 Number of I/O Submission Queues: 64 00:07:04.753 Number of I/O Completion Queues: 64 00:07:04.753 00:07:04.753 ZNS Specific Controller Data 00:07:04.753 ============================ 00:07:04.753 Zone Append Size Limit: 0 00:07:04.753 00:07:04.753 00:07:04.753 Active Namespaces 00:07:04.753 ================= 00:07:04.753 Namespace ID:1 00:07:04.753 Error Recovery Timeout: Unlimited 00:07:04.753 Command Set Identifier: NVM (00h) 00:07:04.753 Deallocate: Supported 00:07:04.753 Deallocated/Unwritten Error: Supported 00:07:04.753 Deallocated Read Value: All 0x00 00:07:04.753 Deallocate in Write Zeroes: Not Supported 00:07:04.753 Deallocated Guard Field: 0xFFFF 00:07:04.753 Flush: Supported 00:07:04.753 Reservation: Not Supported 00:07:04.753 Namespace Sharing Capabilities: Private 00:07:04.753 Size (in LBAs): 1048576 (4GiB) 00:07:04.753 Capacity (in LBAs): 1048576 (4GiB) 00:07:04.753 Utilization (in LBAs): 1048576 (4GiB) 00:07:04.753 Thin Provisioning: Not Supported 00:07:04.753 Per-NS Atomic Units: No 00:07:04.753 Maximum Single Source Range Length: 128 00:07:04.753 Maximum Copy Length: 128 00:07:04.753 Maximum Source Range Count: 128 00:07:04.753 NGUID/EUI64 Never Reused: No 00:07:04.753 Namespace Write Protected: No 00:07:04.753 Number of LBA Formats: 8 00:07:04.753 Current LBA Format: LBA Format #04 00:07:04.753 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.753 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.753 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.753 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.753 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:04.753 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.753 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:04.753 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.753 00:07:04.753 NVM Specific Namespace Data 00:07:04.753 =========================== 00:07:04.753 Logical Block Storage Tag Mask: 0 00:07:04.753 Protection Information Capabilities: 00:07:04.753 16b Guard Protection Information Storage Tag Support: No 00:07:04.753 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:04.753 Storage Tag Check Read Support: No 00:07:04.753 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Namespace ID:2 00:07:04.753 Error Recovery Timeout: Unlimited 00:07:04.753 Command Set Identifier: NVM (00h) 00:07:04.753 Deallocate: Supported 00:07:04.753 Deallocated/Unwritten Error: Supported 00:07:04.753 Deallocated Read Value: All 0x00 00:07:04.753 Deallocate in Write Zeroes: Not Supported 00:07:04.753 Deallocated Guard Field: 0xFFFF 00:07:04.753 Flush: Supported 00:07:04.753 Reservation: Not Supported 00:07:04.753 Namespace Sharing Capabilities: Private 00:07:04.753 Size (in LBAs): 1048576 (4GiB) 00:07:04.753 Capacity (in LBAs): 1048576 (4GiB) 00:07:04.753 Utilization (in LBAs): 1048576 (4GiB) 00:07:04.753 Thin Provisioning: Not Supported 00:07:04.753 Per-NS Atomic Units: No 00:07:04.753 Maximum Single Source Range Length: 128 00:07:04.753 Maximum Copy Length: 128 00:07:04.753 Maximum Source Range Count: 128 00:07:04.753 NGUID/EUI64 Never Reused: No 00:07:04.753 Namespace Write Protected: No 00:07:04.753 Number of LBA Formats: 8 00:07:04.753 Current LBA Format: LBA Format #04 00:07:04.753 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.753 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.753 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.753 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.753 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:04.753 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.753 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:04.753 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.753 00:07:04.753 NVM Specific Namespace Data 00:07:04.753 =========================== 00:07:04.753 Logical Block Storage Tag Mask: 0 00:07:04.753 Protection Information Capabilities: 00:07:04.753 16b Guard Protection Information Storage Tag Support: No 00:07:04.753 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:04.753 Storage Tag Check Read Support: No 00:07:04.753 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.753 Namespace ID:3 00:07:04.753 Error Recovery Timeout: Unlimited 00:07:04.753 Command Set Identifier: NVM (00h) 00:07:04.753 Deallocate: Supported 00:07:04.753 Deallocated/Unwritten Error: Supported 00:07:04.753 Deallocated Read Value: All 0x00 00:07:04.753 Deallocate in Write Zeroes: Not Supported 00:07:04.753 Deallocated Guard Field: 0xFFFF 00:07:04.753 Flush: Supported 00:07:04.753 Reservation: Not Supported 00:07:04.753 
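The parenthesized capacities in these namespace sections are simply the LBA count multiplied by the block size of the current LBA format, e.g. 1048576 LBAs x 4096 B = 4 GiB for the namespaces here, and 262144 x 4096 = 1 GiB for the FDP namespace earlier in this stage. A one-line check (assuming the usual <stdio.h>, <stdint.h> and <inttypes.h> includes):

uint64_t lbas = 1048576, block = 4096;              /* values from the dump above */
printf("%" PRIu64 " GiB\n", (lbas * block) >> 30);  /* prints 4 */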
Namespace Sharing Capabilities: Private 00:07:04.753 Size (in LBAs): 1048576 (4GiB) 00:07:04.753 Capacity (in LBAs): 1048576 (4GiB) 00:07:04.753 Utilization (in LBAs): 1048576 (4GiB) 00:07:04.753 Thin Provisioning: Not Supported 00:07:04.753 Per-NS Atomic Units: No 00:07:04.753 Maximum Single Source Range Length: 128 00:07:04.753 Maximum Copy Length: 128 00:07:04.753 Maximum Source Range Count: 128 00:07:04.753 NGUID/EUI64 Never Reused: No 00:07:04.753 Namespace Write Protected: No 00:07:04.753 Number of LBA Formats: 8 00:07:04.753 Current LBA Format: LBA Format #04 00:07:04.753 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.753 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.753 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.753 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.753 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:04.753 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.754 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:04.754 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.754 00:07:04.754 NVM Specific Namespace Data 00:07:04.754 =========================== 00:07:04.754 Logical Block Storage Tag Mask: 0 00:07:04.754 Protection Information Capabilities: 00:07:04.754 16b Guard Protection Information Storage Tag Support: No 00:07:04.754 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:04.754 Storage Tag Check Read Support: No 00:07:04.754 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.754 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.754 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.754 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.754 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.754 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.754 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.754 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.754 02:54:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:04.754 02:54:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:04.754 ===================================================== 00:07:04.754 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:04.754 ===================================================== 00:07:04.754 Controller Capabilities/Features 00:07:04.754 ================================ 00:07:04.754 Vendor ID: 1b36 00:07:04.754 Subsystem Vendor ID: 1af4 00:07:04.754 Serial Number: 12343 00:07:04.754 Model Number: QEMU NVMe Ctrl 00:07:04.754 Firmware Version: 8.0.0 00:07:04.754 Recommended Arb Burst: 6 00:07:04.754 IEEE OUI Identifier: 00 54 52 00:07:04.754 Multi-path I/O 00:07:04.754 May have multiple subsystem ports: No 00:07:04.754 May have multiple controllers: Yes 00:07:04.754 Associated with SR-IOV VF: No 00:07:04.754 Max Data Transfer Size: 524288 00:07:04.754 Max Number of Namespaces: 256 00:07:04.754 Max Number of I/O Queues: 64 00:07:04.754 NVMe Specification Version (VS): 1.4 00:07:04.754 NVMe Specification Version (Identify): 1.4 00:07:04.754 Maximum Queue Entries: 2048 
00:07:04.754 Contiguous Queues Required: Yes 00:07:04.754 Arbitration Mechanisms Supported 00:07:04.754 Weighted Round Robin: Not Supported 00:07:04.754 Vendor Specific: Not Supported 00:07:04.754 Reset Timeout: 7500 ms 00:07:04.754 Doorbell Stride: 4 bytes 00:07:04.754 NVM Subsystem Reset: Not Supported 00:07:04.754 Command Sets Supported 00:07:04.754 NVM Command Set: Supported 00:07:04.754 Boot Partition: Not Supported 00:07:04.754 Memory Page Size Minimum: 4096 bytes 00:07:04.754 Memory Page Size Maximum: 65536 bytes 00:07:04.754 Persistent Memory Region: Not Supported 00:07:04.754 Optional Asynchronous Events Supported 00:07:04.754 Namespace Attribute Notices: Supported 00:07:04.754 Firmware Activation Notices: Not Supported 00:07:04.754 ANA Change Notices: Not Supported 00:07:04.754 PLE Aggregate Log Change Notices: Not Supported 00:07:04.754 LBA Status Info Alert Notices: Not Supported 00:07:04.754 EGE Aggregate Log Change Notices: Not Supported 00:07:04.754 Normal NVM Subsystem Shutdown event: Not Supported 00:07:04.754 Zone Descriptor Change Notices: Not Supported 00:07:04.754 Discovery Log Change Notices: Not Supported 00:07:04.754 Controller Attributes 00:07:04.754 128-bit Host Identifier: Not Supported 00:07:04.754 Non-Operational Permissive Mode: Not Supported 00:07:04.754 NVM Sets: Not Supported 00:07:04.754 Read Recovery Levels: Not Supported 00:07:04.754 Endurance Groups: Supported 00:07:04.754 Predictable Latency Mode: Not Supported 00:07:04.754 Traffic Based Keep Alive: Not Supported 00:07:04.754 Namespace Granularity: Not Supported 00:07:04.754 SQ Associations: Not Supported 00:07:04.754 UUID List: Not Supported 00:07:04.754 Multi-Domain Subsystem: Not Supported 00:07:04.754 Fixed Capacity Management: Not Supported 00:07:04.754 Variable Capacity Management: Not Supported 00:07:04.754 Delete Endurance Group: Not Supported 00:07:04.754 Delete NVM Set: Not Supported 00:07:04.754 Extended LBA Formats Supported: Supported 00:07:04.754 Flexible Data Placement Supported: Supported 00:07:04.754 00:07:04.754 Controller Memory Buffer Support 00:07:04.754 ================================ 00:07:04.754 Supported: No 00:07:04.754 00:07:04.754 Persistent Memory Region Support 00:07:04.754 ================================ 00:07:04.754 Supported: No 00:07:04.754 00:07:04.754 Admin Command Set Attributes 00:07:04.754 ============================ 00:07:04.754 Security Send/Receive: Not Supported 00:07:04.754 Format NVM: Supported 00:07:04.754 Firmware Activate/Download: Not Supported 00:07:04.754 Namespace Management: Supported 00:07:04.754 Device Self-Test: Not Supported 00:07:04.754 Directives: Supported 00:07:04.754 NVMe-MI: Not Supported 00:07:04.754 Virtualization Management: Not Supported 00:07:04.754 Doorbell Buffer Config: Supported 00:07:04.754 Get LBA Status Capability: Not Supported 00:07:04.754 Command & Feature Lockdown Capability: Not Supported 00:07:04.754 Abort Command Limit: 4 00:07:04.754 Async Event Request Limit: 4 00:07:04.754 Number of Firmware Slots: N/A 00:07:04.754 Firmware Slot 1 Read-Only: N/A 00:07:04.754 Firmware Activation Without Reset: N/A 00:07:04.754 Multiple Update Detection Support: N/A 00:07:04.754 Firmware Update Granularity: No Information Provided 00:07:04.754 Per-Namespace SMART Log: Yes 00:07:04.754 Asymmetric Namespace Access Log Page: Not Supported 00:07:04.754 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:04.754 Command Effects Log Page: Supported 00:07:04.754 Get Log Page Extended Data: Supported 00:07:04.754 Telemetry Log Pages: Not 
Supported 00:07:04.754 Persistent Event Log Pages: Not Supported 00:07:04.754 Supported Log Pages Log Page: May Support 00:07:04.754 Commands Supported & Effects Log Page: Not Supported 00:07:04.754 Feature Identifiers & Effects Log Page: May Support 00:07:04.754 NVMe-MI Commands & Effects Log Page: May Support 00:07:04.754 Data Area 4 for Telemetry Log: Not Supported 00:07:04.754 Error Log Page Entries Supported: 1 00:07:04.754 Keep Alive: Not Supported 00:07:04.754 00:07:04.754 NVM Command Set Attributes 00:07:04.754 ========================== 00:07:04.754 Submission Queue Entry Size 00:07:04.754 Max: 64 00:07:04.754 Min: 64 00:07:04.754 Completion Queue Entry Size 00:07:04.754 Max: 16 00:07:04.754 Min: 16 00:07:04.754 Number of Namespaces: 256 00:07:04.754 Compare Command: Supported 00:07:04.754 Write Uncorrectable Command: Not Supported 00:07:04.754 Dataset Management Command: Supported 00:07:04.754 Write Zeroes Command: Supported 00:07:04.754 Set Features Save Field: Supported 00:07:04.754 Reservations: Not Supported 00:07:04.754 Timestamp: Supported 00:07:04.754 Copy: Supported 00:07:04.754 Volatile Write Cache: Present 00:07:04.754 Atomic Write Unit (Normal): 1 00:07:04.754 Atomic Write Unit (PFail): 1 00:07:04.754 Atomic Compare & Write Unit: 1 00:07:04.754 Fused Compare & Write: Not Supported 00:07:04.754 Scatter-Gather List 00:07:04.754 SGL Command Set: Supported 00:07:04.754 SGL Keyed: Not Supported 00:07:04.754 SGL Bit Bucket Descriptor: Not Supported 00:07:04.754 SGL Metadata Pointer: Not Supported 00:07:04.754 Oversized SGL: Not Supported 00:07:04.754 SGL Metadata Address: Not Supported 00:07:04.754 SGL Offset: Not Supported 00:07:04.754 Transport SGL Data Block: Not Supported 00:07:04.754 Replay Protected Memory Block: Not Supported 00:07:04.754 00:07:04.754 Firmware Slot Information 00:07:04.754 ========================= 00:07:04.754 Active slot: 1 00:07:04.754 Slot 1 Firmware Revision: 1.0 00:07:04.754 00:07:04.754 00:07:04.754 Commands Supported and Effects 00:07:04.754 ============================== 00:07:04.754 Admin Commands 00:07:04.754 -------------- 00:07:04.754 Delete I/O Submission Queue (00h): Supported 00:07:04.754 Create I/O Submission Queue (01h): Supported 00:07:04.754 Get Log Page (02h): Supported 00:07:04.754 Delete I/O Completion Queue (04h): Supported 00:07:04.754 Create I/O Completion Queue (05h): Supported 00:07:04.754 Identify (06h): Supported 00:07:04.754 Abort (08h): Supported 00:07:04.754 Set Features (09h): Supported 00:07:04.754 Get Features (0Ah): Supported 00:07:04.754 Asynchronous Event Request (0Ch): Supported 00:07:04.754 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:04.754 Directive Send (19h): Supported 00:07:04.754 Directive Receive (1Ah): Supported 00:07:04.754 Virtualization Management (1Ch): Supported 00:07:04.754 Doorbell Buffer Config (7Ch): Supported 00:07:04.754 Format NVM (80h): Supported LBA-Change 00:07:04.754 I/O Commands 00:07:04.754 ------------ 00:07:04.754 Flush (00h): Supported LBA-Change 00:07:04.754 Write (01h): Supported LBA-Change 00:07:04.754 Read (02h): Supported 00:07:04.754 Compare (05h): Supported 00:07:04.754 Write Zeroes (08h): Supported LBA-Change 00:07:04.754 Dataset Management (09h): Supported LBA-Change 00:07:04.754 Unknown (0Ch): Supported 00:07:04.754 Unknown (12h): Supported 00:07:04.754 Copy (19h): Supported LBA-Change 00:07:04.755 Unknown (1Dh): Supported LBA-Change 00:07:04.755 00:07:04.755 Error Log 00:07:04.755 ========= 00:07:04.755 00:07:04.755 Arbitration 00:07:04.755 =========== 
00:07:04.755 Arbitration Burst: no limit 00:07:04.755 00:07:04.755 Power Management 00:07:04.755 ================ 00:07:04.755 Number of Power States: 1 00:07:04.755 Current Power State: Power State #0 00:07:04.755 Power State #0: 00:07:04.755 Max Power: 25.00 W 00:07:04.755 Non-Operational State: Operational 00:07:04.755 Entry Latency: 16 microseconds 00:07:04.755 Exit Latency: 4 microseconds 00:07:04.755 Relative Read Throughput: 0 00:07:04.755 Relative Read Latency: 0 00:07:04.755 Relative Write Throughput: 0 00:07:04.755 Relative Write Latency: 0 00:07:04.755 Idle Power: Not Reported 00:07:04.755 Active Power: Not Reported 00:07:04.755 Non-Operational Permissive Mode: Not Supported 00:07:04.755 00:07:04.755 Health Information 00:07:04.755 ================== 00:07:04.755 Critical Warnings: 00:07:04.755 Available Spare Space: OK 00:07:04.755 Temperature: OK 00:07:04.755 Device Reliability: OK 00:07:04.755 Read Only: No 00:07:04.755 Volatile Memory Backup: OK 00:07:04.755 Current Temperature: 323 Kelvin (50 Celsius) 00:07:04.755 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:04.755 Available Spare: 0% 00:07:04.755 Available Spare Threshold: 0% 00:07:04.755 Life Percentage Used: 0% 00:07:04.755 Data Units Read: 821 00:07:04.755 Data Units Written: 750 00:07:04.755 Host Read Commands: 39669 00:07:04.755 Host Write Commands: 39093 00:07:04.755 Controller Busy Time: 0 minutes 00:07:04.755 Power Cycles: 0 00:07:04.755 Power On Hours: 0 hours 00:07:04.755 Unsafe Shutdowns: 0 00:07:04.755 Unrecoverable Media Errors: 0 00:07:04.755 Lifetime Error Log Entries: 0 00:07:04.755 Warning Temperature Time: 0 minutes 00:07:04.755 Critical Temperature Time: 0 minutes 00:07:04.755 00:07:04.755 Number of Queues 00:07:04.755 ================ 00:07:04.755 Number of I/O Submission Queues: 64 00:07:04.755 Number of I/O Completion Queues: 64 00:07:04.755 00:07:04.755 ZNS Specific Controller Data 00:07:04.755 ============================ 00:07:04.755 Zone Append Size Limit: 0 00:07:04.755 00:07:04.755 00:07:04.755 Active Namespaces 00:07:04.755 ================= 00:07:04.755 Namespace ID:1 00:07:04.755 Error Recovery Timeout: Unlimited 00:07:04.755 Command Set Identifier: NVM (00h) 00:07:04.755 Deallocate: Supported 00:07:04.755 Deallocated/Unwritten Error: Supported 00:07:04.755 Deallocated Read Value: All 0x00 00:07:04.755 Deallocate in Write Zeroes: Not Supported 00:07:04.755 Deallocated Guard Field: 0xFFFF 00:07:04.755 Flush: Supported 00:07:04.755 Reservation: Not Supported 00:07:04.755 Namespace Sharing Capabilities: Multiple Controllers 00:07:04.755 Size (in LBAs): 262144 (1GiB) 00:07:04.755 Capacity (in LBAs): 262144 (1GiB) 00:07:04.755 Utilization (in LBAs): 262144 (1GiB) 00:07:04.755 Thin Provisioning: Not Supported 00:07:04.755 Per-NS Atomic Units: No 00:07:04.755 Maximum Single Source Range Length: 128 00:07:04.755 Maximum Copy Length: 128 00:07:04.755 Maximum Source Range Count: 128 00:07:04.755 NGUID/EUI64 Never Reused: No 00:07:04.755 Namespace Write Protected: No 00:07:04.755 Endurance group ID: 1 00:07:04.755 Number of LBA Formats: 8 00:07:04.755 Current LBA Format: LBA Format #04 00:07:04.755 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.755 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.755 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.755 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.755 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:04.755 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.755 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:04.755 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.755 00:07:04.755 Get Feature FDP: 00:07:04.755 ================ 00:07:04.755 Enabled: Yes 00:07:04.755 FDP configuration index: 0 00:07:04.755 00:07:04.755 FDP configurations log page 00:07:04.755 =========================== 00:07:04.755 Number of FDP configurations: 1 00:07:04.755 Version: 0 00:07:04.755 Size: 112 00:07:04.755 FDP Configuration Descriptor: 0 00:07:04.755 Descriptor Size: 96 00:07:04.755 Reclaim Group Identifier format: 2 00:07:04.755 FDP Volatile Write Cache: Not Present 00:07:04.755 FDP Configuration: Valid 00:07:04.755 Vendor Specific Size: 0 00:07:04.755 Number of Reclaim Groups: 2 00:07:04.755 Number of Reclaim Unit Handles: 8 00:07:04.755 Max Placement Identifiers: 128 00:07:04.755 Number of Namespaces Supported: 256 00:07:04.755 Reclaim Unit Nominal Size: 6000000 bytes 00:07:04.755 Estimated Reclaim Unit Time Limit: Not Reported 00:07:04.755 RUH Desc #000: RUH Type: Initially Isolated 00:07:04.755 RUH Desc #001: RUH Type: Initially Isolated 00:07:04.755 RUH Desc #002: RUH Type: Initially Isolated 00:07:04.755 RUH Desc #003: RUH Type: Initially Isolated 00:07:04.755 RUH Desc #004: RUH Type: Initially Isolated 00:07:04.755 RUH Desc #005: RUH Type: Initially Isolated 00:07:04.755 RUH Desc #006: RUH Type: Initially Isolated 00:07:04.755 RUH Desc #007: RUH Type: Initially Isolated 00:07:04.755 00:07:04.755 FDP reclaim unit handle usage log page 00:07:05.013 ====================================== 00:07:05.013 Number of Reclaim Unit Handles: 8 00:07:05.013 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:05.013 RUH Usage Desc #001: RUH Attributes: Unused 00:07:05.013 RUH Usage Desc #002: RUH Attributes: Unused 00:07:05.013 RUH Usage Desc #003: RUH Attributes: Unused 00:07:05.013 RUH Usage Desc #004: RUH Attributes: Unused 00:07:05.013 RUH Usage Desc #005: RUH Attributes: Unused 00:07:05.013 RUH Usage Desc #006: RUH Attributes: Unused 00:07:05.013 RUH Usage Desc #007: RUH Attributes: Unused 00:07:05.013 00:07:05.013 FDP statistics log page 00:07:05.013 ======================= 00:07:05.013 Host bytes with metadata written: 457023488 00:07:05.013 Media bytes with metadata written: 457076736 00:07:05.013 Media bytes erased: 0 00:07:05.013 00:07:05.013 FDP events log page 00:07:05.013 =================== 00:07:05.013 Number of FDP events: 0 00:07:05.013 00:07:05.013 NVM Specific Namespace Data 00:07:05.013 =========================== 00:07:05.013 Logical Block Storage Tag Mask: 0 00:07:05.013 Protection Information Capabilities: 00:07:05.013 16b Guard Protection Information Storage Tag Support: No 00:07:05.013 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:05.013 Storage Tag Check Read Support: No 00:07:05.013 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.013 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.013 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.013 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.013 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.013 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.013 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.013 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.013 00:07:05.013 real 0m1.243s 00:07:05.013 user 0m0.467s 00:07:05.013 sys 0m0.551s 00:07:05.013 02:54:59 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.013 02:54:59 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:05.013 ************************************ 00:07:05.013 END TEST nvme_identify 00:07:05.013 ************************************ 00:07:05.013 02:54:59 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:05.013 02:54:59 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.013 02:54:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.013 02:54:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.013 ************************************ 00:07:05.013 START TEST nvme_perf 00:07:05.013 ************************************ 00:07:05.013 02:54:59 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:05.013 02:54:59 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:06.388 Initializing NVMe Controllers 00:07:06.388 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:06.388 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:06.388 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:06.388 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:06.388 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:06.388 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:06.388 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:06.388 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:06.388 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:06.388 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:06.388 Initialization complete. Launching workers. 
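The identify dumps above come from the nvme_identify test: nvme.sh loops over the PCIe BDFs discovered on the VM and prints each controller with spdk_nvme_identify. A minimal sketch of that loop, assuming the repo layout and the BDF list visible in this log:

    # Sketch of the identify loop driven by nvme.sh in this run (paths and
    # BDFs taken from the log above; adjust for other environments).
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        # -r selects the controller by transport ID; -i 0 joins shared-memory
        # group 0 so the tool can coexist with other SPDK processes in the job.
        "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done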
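The nvme_perf results below were produced by the command line captured above. Reading the flags (meanings per the tool's usage text; a sketch, since defaults can vary across SPDK releases): -q 128 sets the per-queue I/O depth, -w read requests a sequential-read workload, -o 12288 uses 12 KiB I/Os (three 4 KiB blocks at the current LBA format), -t 1 runs for one second, -i 0 joins shared-memory group 0, -N skips the controller shutdown-notification step on exit, and -LL enables software latency tracking at its detailed level, which is why per-bucket histograms follow the summary percentiles. A lighter summary-only variant would pass -L once:

    # Same 12 KiB read workload with summary-only latency tracking (-L once);
    # hypothetical standalone invocation reusing SPDK_BIN from the sketch above.
    "$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -L -i 0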
00:07:06.388 ======================================================== 00:07:06.388 Latency(us) 00:07:06.388 Device Information : IOPS MiB/s Average min max 00:07:06.388 PCIE (0000:00:10.0) NSID 1 from core 0: 18274.96 214.16 7026.40 5548.69 27689.98 00:07:06.388 PCIE (0000:00:11.0) NSID 1 from core 0: 18274.96 214.16 7021.64 5606.78 26354.88 00:07:06.388 PCIE (0000:00:13.0) NSID 1 from core 0: 18274.96 214.16 7016.01 5639.66 25246.18 00:07:06.388 PCIE (0000:00:12.0) NSID 1 from core 0: 18274.96 214.16 7009.59 5619.14 23872.32 00:07:06.388 PCIE (0000:00:12.0) NSID 2 from core 0: 18274.96 214.16 7003.28 5610.00 22508.20 00:07:06.388 PCIE (0000:00:12.0) NSID 3 from core 0: 18274.96 214.16 6995.71 5616.42 21121.05 00:07:06.388 ======================================================== 00:07:06.388 Total : 109649.77 1284.96 7012.11 5548.69 27689.98 00:07:06.388 00:07:06.388 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:06.388 ================================================================================= 00:07:06.388 1.00000% : 5671.385us 00:07:06.388 10.00000% : 5873.034us 00:07:06.388 25.00000% : 6099.889us 00:07:06.388 50.00000% : 6452.775us 00:07:06.388 75.00000% : 6956.898us 00:07:06.388 90.00000% : 9225.452us 00:07:06.388 95.00000% : 10435.348us 00:07:06.388 98.00000% : 11292.357us 00:07:06.388 99.00000% : 14216.271us 00:07:06.388 99.50000% : 20164.923us 00:07:06.388 99.90000% : 27222.646us 00:07:06.388 99.99000% : 27625.945us 00:07:06.388 99.99900% : 27827.594us 00:07:06.388 99.99990% : 27827.594us 00:07:06.388 99.99999% : 27827.594us 00:07:06.388 00:07:06.388 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:06.388 ================================================================================= 00:07:06.388 1.00000% : 5747.003us 00:07:06.388 10.00000% : 5923.446us 00:07:06.388 25.00000% : 6125.095us 00:07:06.388 50.00000% : 6427.569us 00:07:06.388 75.00000% : 6906.486us 00:07:06.388 90.00000% : 9275.865us 00:07:06.388 95.00000% : 10485.760us 00:07:06.388 98.00000% : 11241.945us 00:07:06.388 99.00000% : 14518.745us 00:07:06.388 99.50000% : 19156.677us 00:07:06.388 99.90000% : 25811.102us 00:07:06.388 99.99000% : 26416.049us 00:07:06.388 99.99900% : 26416.049us 00:07:06.388 99.99990% : 26416.049us 00:07:06.388 99.99999% : 26416.049us 00:07:06.388 00:07:06.388 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:06.388 ================================================================================= 00:07:06.388 1.00000% : 5747.003us 00:07:06.388 10.00000% : 5923.446us 00:07:06.388 25.00000% : 6125.095us 00:07:06.388 50.00000% : 6427.569us 00:07:06.388 75.00000% : 6906.486us 00:07:06.388 90.00000% : 9326.277us 00:07:06.388 95.00000% : 10485.760us 00:07:06.388 98.00000% : 11443.594us 00:07:06.388 99.00000% : 14216.271us 00:07:06.388 99.50000% : 18047.606us 00:07:06.388 99.90000% : 24802.855us 00:07:06.388 99.99000% : 25306.978us 00:07:06.388 99.99900% : 25306.978us 00:07:06.388 99.99990% : 25306.978us 00:07:06.388 99.99999% : 25306.978us 00:07:06.388 00:07:06.388 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:06.388 ================================================================================= 00:07:06.388 1.00000% : 5747.003us 00:07:06.388 10.00000% : 5923.446us 00:07:06.388 25.00000% : 6125.095us 00:07:06.388 50.00000% : 6427.569us 00:07:06.388 75.00000% : 6856.074us 00:07:06.388 90.00000% : 9275.865us 00:07:06.388 95.00000% : 10586.585us 00:07:06.388 98.00000% : 11594.831us 00:07:06.388 99.00000% 
: 14619.569us 00:07:06.388 99.50000% : 16434.412us 00:07:06.388 99.90000% : 23391.311us 00:07:06.388 99.99000% : 23895.434us 00:07:06.388 99.99900% : 23895.434us 00:07:06.388 99.99990% : 23895.434us 00:07:06.388 99.99999% : 23895.434us 00:07:06.388 00:07:06.388 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:06.388 ================================================================================= 00:07:06.388 1.00000% : 5747.003us 00:07:06.388 10.00000% : 5923.446us 00:07:06.388 25.00000% : 6125.095us 00:07:06.388 50.00000% : 6427.569us 00:07:06.388 75.00000% : 6856.074us 00:07:06.388 90.00000% : 9225.452us 00:07:06.388 95.00000% : 10636.997us 00:07:06.388 98.00000% : 11645.243us 00:07:06.388 99.00000% : 14216.271us 00:07:06.388 99.50000% : 15829.465us 00:07:06.388 99.90000% : 21979.766us 00:07:06.388 99.99000% : 22483.889us 00:07:06.388 99.99900% : 22584.714us 00:07:06.388 99.99990% : 22584.714us 00:07:06.388 99.99999% : 22584.714us 00:07:06.388 00:07:06.388 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:06.388 ================================================================================= 00:07:06.388 1.00000% : 5747.003us 00:07:06.388 10.00000% : 5923.446us 00:07:06.388 25.00000% : 6125.095us 00:07:06.388 50.00000% : 6427.569us 00:07:06.388 75.00000% : 6856.074us 00:07:06.388 90.00000% : 9175.040us 00:07:06.388 95.00000% : 10636.997us 00:07:06.388 98.00000% : 11695.655us 00:07:06.388 99.00000% : 13913.797us 00:07:06.388 99.50000% : 15728.640us 00:07:06.388 99.90000% : 20669.046us 00:07:06.388 99.99000% : 21173.169us 00:07:06.388 99.99900% : 21173.169us 00:07:06.388 99.99990% : 21173.169us 00:07:06.388 99.99999% : 21173.169us 00:07:06.388 00:07:06.388 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:06.388 ============================================================================== 00:07:06.388 Range in us Cumulative IO count 00:07:06.388 5545.354 - 5570.560: 0.0436% ( 8) 00:07:06.388 5570.560 - 5595.766: 0.1960% ( 28) 00:07:06.388 5595.766 - 5620.972: 0.4410% ( 45) 00:07:06.388 5620.972 - 5646.178: 0.7404% ( 55) 00:07:06.388 5646.178 - 5671.385: 1.3121% ( 105) 00:07:06.388 5671.385 - 5696.591: 2.0906% ( 143) 00:07:06.388 5696.591 - 5721.797: 2.9834% ( 164) 00:07:06.388 5721.797 - 5747.003: 4.0669% ( 199) 00:07:06.388 5747.003 - 5772.209: 5.3027% ( 227) 00:07:06.388 5772.209 - 5797.415: 6.5658% ( 232) 00:07:06.389 5797.415 - 5822.622: 8.0194% ( 267) 00:07:06.389 5822.622 - 5847.828: 9.5547% ( 282) 00:07:06.389 5847.828 - 5873.034: 11.0518% ( 275) 00:07:06.389 5873.034 - 5898.240: 12.6143% ( 287) 00:07:06.389 5898.240 - 5923.446: 14.1823% ( 288) 00:07:06.389 5923.446 - 5948.652: 15.8482% ( 306) 00:07:06.389 5948.652 - 5973.858: 17.5087% ( 305) 00:07:06.389 5973.858 - 5999.065: 19.0712% ( 287) 00:07:06.389 5999.065 - 6024.271: 20.8406% ( 325) 00:07:06.389 6024.271 - 6049.477: 22.5338% ( 311) 00:07:06.389 6049.477 - 6074.683: 24.2051% ( 307) 00:07:06.389 6074.683 - 6099.889: 26.0017% ( 330) 00:07:06.389 6099.889 - 6125.095: 27.7929% ( 329) 00:07:06.389 6125.095 - 6150.302: 29.4861% ( 311) 00:07:06.389 6150.302 - 6175.508: 31.2500% ( 324) 00:07:06.389 6175.508 - 6200.714: 33.1228% ( 344) 00:07:06.389 6200.714 - 6225.920: 35.0773% ( 359) 00:07:06.389 6225.920 - 6251.126: 36.8630% ( 328) 00:07:06.389 6251.126 - 6276.332: 38.6868% ( 335) 00:07:06.389 6276.332 - 6301.538: 40.5379% ( 340) 00:07:06.389 6301.538 - 6326.745: 42.4270% ( 347) 00:07:06.389 6326.745 - 6351.951: 44.1964% ( 325) 00:07:06.389 6351.951 - 6377.157: 46.1128% 
( 352) 00:07:06.389 6377.157 - 6402.363: 47.9149% ( 331) 00:07:06.389 6402.363 - 6427.569: 49.9782% ( 379) 00:07:06.389 6427.569 - 6452.775: 51.7367% ( 323) 00:07:06.389 6452.775 - 6503.188: 55.4388% ( 680) 00:07:06.389 6503.188 - 6553.600: 59.2334% ( 697) 00:07:06.389 6553.600 - 6604.012: 62.8648% ( 667) 00:07:06.389 6604.012 - 6654.425: 66.1313% ( 600) 00:07:06.389 6654.425 - 6704.837: 68.9787% ( 523) 00:07:06.389 6704.837 - 6755.249: 71.2652% ( 420) 00:07:06.389 6755.249 - 6805.662: 72.9149% ( 303) 00:07:06.389 6805.662 - 6856.074: 74.1180% ( 221) 00:07:06.389 6856.074 - 6906.486: 74.9891% ( 160) 00:07:06.389 6906.486 - 6956.898: 75.7078% ( 132) 00:07:06.389 6956.898 - 7007.311: 76.2957% ( 108) 00:07:06.389 7007.311 - 7057.723: 76.8619% ( 104) 00:07:06.389 7057.723 - 7108.135: 77.3846% ( 96) 00:07:06.389 7108.135 - 7158.548: 77.8637% ( 88) 00:07:06.389 7158.548 - 7208.960: 78.1958% ( 61) 00:07:06.389 7208.960 - 7259.372: 78.4462% ( 46) 00:07:06.389 7259.372 - 7309.785: 78.7184% ( 50) 00:07:06.389 7309.785 - 7360.197: 78.9852% ( 49) 00:07:06.389 7360.197 - 7410.609: 79.1921% ( 38) 00:07:06.389 7410.609 - 7461.022: 79.3935% ( 37) 00:07:06.389 7461.022 - 7511.434: 79.6113% ( 40) 00:07:06.389 7511.434 - 7561.846: 79.7692% ( 29) 00:07:06.389 7561.846 - 7612.258: 79.9815% ( 39) 00:07:06.389 7612.258 - 7662.671: 80.2591% ( 51) 00:07:06.389 7662.671 - 7713.083: 80.5422% ( 52) 00:07:06.389 7713.083 - 7763.495: 80.9125% ( 68) 00:07:06.389 7763.495 - 7813.908: 81.2881% ( 69) 00:07:06.389 7813.908 - 7864.320: 81.6202% ( 61) 00:07:06.389 7864.320 - 7914.732: 82.0231% ( 74) 00:07:06.389 7914.732 - 7965.145: 82.4477% ( 78) 00:07:06.389 7965.145 - 8015.557: 82.8343% ( 71) 00:07:06.389 8015.557 - 8065.969: 83.2753% ( 81) 00:07:06.389 8065.969 - 8116.382: 83.6945% ( 77) 00:07:06.389 8116.382 - 8166.794: 84.0973% ( 74) 00:07:06.389 8166.794 - 8217.206: 84.4948% ( 73) 00:07:06.389 8217.206 - 8267.618: 84.8704% ( 69) 00:07:06.389 8267.618 - 8318.031: 85.2515% ( 70) 00:07:06.389 8318.031 - 8368.443: 85.5891% ( 62) 00:07:06.389 8368.443 - 8418.855: 85.9266% ( 62) 00:07:06.389 8418.855 - 8469.268: 86.2533% ( 60) 00:07:06.389 8469.268 - 8519.680: 86.5854% ( 61) 00:07:06.389 8519.680 - 8570.092: 86.9175% ( 61) 00:07:06.389 8570.092 - 8620.505: 87.2877% ( 68) 00:07:06.389 8620.505 - 8670.917: 87.5926% ( 56) 00:07:06.389 8670.917 - 8721.329: 87.8757% ( 52) 00:07:06.389 8721.329 - 8771.742: 88.0989% ( 41) 00:07:06.389 8771.742 - 8822.154: 88.3221% ( 41) 00:07:06.389 8822.154 - 8872.566: 88.5671% ( 45) 00:07:06.389 8872.566 - 8922.978: 88.7522% ( 34) 00:07:06.389 8922.978 - 8973.391: 88.9645% ( 39) 00:07:06.389 8973.391 - 9023.803: 89.1823% ( 40) 00:07:06.389 9023.803 - 9074.215: 89.4055% ( 41) 00:07:06.389 9074.215 - 9124.628: 89.6341% ( 42) 00:07:06.389 9124.628 - 9175.040: 89.8356% ( 37) 00:07:06.389 9175.040 - 9225.452: 90.0697% ( 43) 00:07:06.389 9225.452 - 9275.865: 90.2657% ( 36) 00:07:06.389 9275.865 - 9326.277: 90.4726% ( 38) 00:07:06.389 9326.277 - 9376.689: 90.6849% ( 39) 00:07:06.389 9376.689 - 9427.102: 90.8863% ( 37) 00:07:06.389 9427.102 - 9477.514: 91.0986% ( 39) 00:07:06.389 9477.514 - 9527.926: 91.3055% ( 38) 00:07:06.389 9527.926 - 9578.338: 91.4906% ( 34) 00:07:06.389 9578.338 - 9628.751: 91.6975% ( 38) 00:07:06.389 9628.751 - 9679.163: 91.9153% ( 40) 00:07:06.389 9679.163 - 9729.575: 92.1058% ( 35) 00:07:06.389 9729.575 - 9779.988: 92.2909% ( 34) 00:07:06.389 9779.988 - 9830.400: 92.4706% ( 33) 00:07:06.389 9830.400 - 9880.812: 92.6448% ( 32) 00:07:06.389 9880.812 - 9931.225: 92.8354% ( 35) 
00:07:06.389 9931.225 - 9981.637: 93.0041% ( 31) 00:07:06.389 9981.637 - 10032.049: 93.2110% ( 38) 00:07:06.389 10032.049 - 10082.462: 93.4342% ( 41) 00:07:06.389 10082.462 - 10132.874: 93.6574% ( 41) 00:07:06.389 10132.874 - 10183.286: 93.8589% ( 37) 00:07:06.389 10183.286 - 10233.698: 94.0494% ( 35) 00:07:06.389 10233.698 - 10284.111: 94.2999% ( 46) 00:07:06.389 10284.111 - 10334.523: 94.5231% ( 41) 00:07:06.389 10334.523 - 10384.935: 94.7572% ( 43) 00:07:06.389 10384.935 - 10435.348: 95.0131% ( 47) 00:07:06.389 10435.348 - 10485.760: 95.2254% ( 39) 00:07:06.389 10485.760 - 10536.172: 95.4377% ( 39) 00:07:06.389 10536.172 - 10586.585: 95.6555% ( 40) 00:07:06.389 10586.585 - 10636.997: 95.9005% ( 45) 00:07:06.389 10636.997 - 10687.409: 96.1019% ( 37) 00:07:06.389 10687.409 - 10737.822: 96.3197% ( 40) 00:07:06.389 10737.822 - 10788.234: 96.4939% ( 32) 00:07:06.389 10788.234 - 10838.646: 96.7117% ( 40) 00:07:06.389 10838.646 - 10889.058: 96.9131% ( 37) 00:07:06.389 10889.058 - 10939.471: 97.0928% ( 33) 00:07:06.389 10939.471 - 10989.883: 97.2561% ( 30) 00:07:06.389 10989.883 - 11040.295: 97.4303% ( 32) 00:07:06.389 11040.295 - 11090.708: 97.6045% ( 32) 00:07:06.389 11090.708 - 11141.120: 97.7352% ( 24) 00:07:06.389 11141.120 - 11191.532: 97.8550% ( 22) 00:07:06.389 11191.532 - 11241.945: 97.9530% ( 18) 00:07:06.389 11241.945 - 11292.357: 98.0183% ( 12) 00:07:06.389 11292.357 - 11342.769: 98.0727% ( 10) 00:07:06.389 11342.769 - 11393.182: 98.1381% ( 12) 00:07:06.389 11393.182 - 11443.594: 98.1925% ( 10) 00:07:06.389 11443.594 - 11494.006: 98.2306% ( 7) 00:07:06.389 11494.006 - 11544.418: 98.2687% ( 7) 00:07:06.389 11544.418 - 11594.831: 98.3177% ( 9) 00:07:06.389 11594.831 - 11645.243: 98.3449% ( 5) 00:07:06.389 11645.243 - 11695.655: 98.3885% ( 8) 00:07:06.389 11695.655 - 11746.068: 98.4266% ( 7) 00:07:06.389 11746.068 - 11796.480: 98.4647% ( 7) 00:07:06.389 11796.480 - 11846.892: 98.4865% ( 4) 00:07:06.389 11846.892 - 11897.305: 98.5246% ( 7) 00:07:06.389 11897.305 - 11947.717: 98.5409% ( 3) 00:07:06.389 11947.717 - 11998.129: 98.5573% ( 3) 00:07:06.389 11998.129 - 12048.542: 98.5791% ( 4) 00:07:06.389 12048.542 - 12098.954: 98.5899% ( 2) 00:07:06.389 12098.954 - 12149.366: 98.6117% ( 4) 00:07:06.389 12149.366 - 12199.778: 98.6280% ( 3) 00:07:06.389 12199.778 - 12250.191: 98.6444% ( 3) 00:07:06.389 12250.191 - 12300.603: 98.6607% ( 3) 00:07:06.389 12300.603 - 12351.015: 98.6770% ( 3) 00:07:06.389 12351.015 - 12401.428: 98.6934% ( 3) 00:07:06.389 12401.428 - 12451.840: 98.7097% ( 3) 00:07:06.389 12451.840 - 12502.252: 98.7260% ( 3) 00:07:06.389 12502.252 - 12552.665: 98.7369% ( 2) 00:07:06.389 12552.665 - 12603.077: 98.7478% ( 2) 00:07:06.389 12603.077 - 12653.489: 98.7533% ( 1) 00:07:06.389 12653.489 - 12703.902: 98.7642% ( 2) 00:07:06.389 12703.902 - 12754.314: 98.7750% ( 2) 00:07:06.389 12754.314 - 12804.726: 98.7805% ( 1) 00:07:06.389 12804.726 - 12855.138: 98.7914% ( 2) 00:07:06.389 12855.138 - 12905.551: 98.8077% ( 3) 00:07:06.389 12905.551 - 13006.375: 98.8295% ( 4) 00:07:06.389 13006.375 - 13107.200: 98.8622% ( 6) 00:07:06.389 13107.200 - 13208.025: 98.9003% ( 7) 00:07:06.389 13208.025 - 13308.849: 98.9329% ( 6) 00:07:06.389 13308.849 - 13409.674: 98.9547% ( 4) 00:07:06.389 13812.972 - 13913.797: 98.9601% ( 1) 00:07:06.389 13913.797 - 14014.622: 98.9765% ( 3) 00:07:06.389 14014.622 - 14115.446: 98.9928% ( 3) 00:07:06.389 14115.446 - 14216.271: 99.0146% ( 4) 00:07:06.389 14216.271 - 14317.095: 99.0309% ( 3) 00:07:06.389 14317.095 - 14417.920: 99.0473% ( 3) 00:07:06.389 14417.920 - 
14518.745: 99.0636% ( 3) 00:07:06.389 14518.745 - 14619.569: 99.0799% ( 3) 00:07:06.389 14619.569 - 14720.394: 99.1017% ( 4) 00:07:06.389 14720.394 - 14821.218: 99.1180% ( 3) 00:07:06.389 14821.218 - 14922.043: 99.1398% ( 4) 00:07:06.389 14922.043 - 15022.868: 99.1561% ( 3) 00:07:06.389 15022.868 - 15123.692: 99.1779% ( 4) 00:07:06.389 15123.692 - 15224.517: 99.1943% ( 3) 00:07:06.389 15224.517 - 15325.342: 99.2160% ( 4) 00:07:06.389 15325.342 - 15426.166: 99.2378% ( 4) 00:07:06.389 15426.166 - 15526.991: 99.2541% ( 3) 00:07:06.389 15526.991 - 15627.815: 99.2705% ( 3) 00:07:06.389 15627.815 - 15728.640: 99.2868% ( 3) 00:07:06.389 15728.640 - 15829.465: 99.3031% ( 3) 00:07:06.389 19156.677 - 19257.502: 99.3140% ( 2) 00:07:06.389 19257.502 - 19358.326: 99.3358% ( 4) 00:07:06.389 19358.326 - 19459.151: 99.3576% ( 4) 00:07:06.389 19459.151 - 19559.975: 99.3794% ( 4) 00:07:06.389 19559.975 - 19660.800: 99.4066% ( 5) 00:07:06.389 19660.800 - 19761.625: 99.4229% ( 3) 00:07:06.389 19761.625 - 19862.449: 99.4501% ( 5) 00:07:06.389 19862.449 - 19963.274: 99.4665% ( 3) 00:07:06.389 19963.274 - 20064.098: 99.4882% ( 4) 00:07:06.389 20064.098 - 20164.923: 99.5100% ( 4) 00:07:06.389 20164.923 - 20265.748: 99.5372% ( 5) 00:07:06.389 20265.748 - 20366.572: 99.5536% ( 3) 00:07:06.389 20366.572 - 20467.397: 99.5808% ( 5) 00:07:06.389 20467.397 - 20568.222: 99.6026% ( 4) 00:07:06.389 20568.222 - 20669.046: 99.6243% ( 4) 00:07:06.390 20669.046 - 20769.871: 99.6461% ( 4) 00:07:06.390 20769.871 - 20870.695: 99.6516% ( 1) 00:07:06.390 25508.628 - 25609.452: 99.6570% ( 1) 00:07:06.390 25609.452 - 25710.277: 99.6733% ( 3) 00:07:06.390 25710.277 - 25811.102: 99.6897% ( 3) 00:07:06.390 25811.102 - 26012.751: 99.7278% ( 7) 00:07:06.390 26012.751 - 26214.400: 99.7605% ( 6) 00:07:06.390 26214.400 - 26416.049: 99.7931% ( 6) 00:07:06.390 26416.049 - 26617.698: 99.8258% ( 6) 00:07:06.390 26617.698 - 26819.348: 99.8530% ( 5) 00:07:06.390 26819.348 - 27020.997: 99.8911% ( 7) 00:07:06.390 27020.997 - 27222.646: 99.9292% ( 7) 00:07:06.390 27222.646 - 27424.295: 99.9564% ( 5) 00:07:06.390 27424.295 - 27625.945: 99.9946% ( 7) 00:07:06.390 27625.945 - 27827.594: 100.0000% ( 1) 00:07:06.390 00:07:06.390 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:06.390 ============================================================================== 00:07:06.390 Range in us Cumulative IO count 00:07:06.390 5595.766 - 5620.972: 0.0109% ( 2) 00:07:06.390 5620.972 - 5646.178: 0.0980% ( 16) 00:07:06.390 5646.178 - 5671.385: 0.2123% ( 21) 00:07:06.390 5671.385 - 5696.591: 0.4682% ( 47) 00:07:06.390 5696.591 - 5721.797: 0.9473% ( 88) 00:07:06.390 5721.797 - 5747.003: 1.5407% ( 109) 00:07:06.390 5747.003 - 5772.209: 2.3247% ( 144) 00:07:06.390 5772.209 - 5797.415: 3.3155% ( 182) 00:07:06.390 5797.415 - 5822.622: 4.3717% ( 194) 00:07:06.390 5822.622 - 5847.828: 5.7709% ( 257) 00:07:06.390 5847.828 - 5873.034: 7.2899% ( 279) 00:07:06.390 5873.034 - 5898.240: 8.9231% ( 300) 00:07:06.390 5898.240 - 5923.446: 10.6326% ( 314) 00:07:06.390 5923.446 - 5948.652: 12.4782% ( 339) 00:07:06.390 5948.652 - 5973.858: 14.4436% ( 361) 00:07:06.390 5973.858 - 5999.065: 16.2729% ( 336) 00:07:06.390 5999.065 - 6024.271: 18.1838% ( 351) 00:07:06.390 6024.271 - 6049.477: 20.1328% ( 358) 00:07:06.390 6049.477 - 6074.683: 22.0928% ( 360) 00:07:06.390 6074.683 - 6099.889: 24.1779% ( 383) 00:07:06.390 6099.889 - 6125.095: 26.2631% ( 383) 00:07:06.390 6125.095 - 6150.302: 28.3537% ( 384) 00:07:06.390 6150.302 - 6175.508: 30.4116% ( 378) 00:07:06.390 6175.508 
- 6200.714: 32.5022% ( 384) 00:07:06.390 6200.714 - 6225.920: 34.6145% ( 388) 00:07:06.390 6225.920 - 6251.126: 36.7432% ( 391) 00:07:06.390 6251.126 - 6276.332: 38.9373% ( 403) 00:07:06.390 6276.332 - 6301.538: 41.0279% ( 384) 00:07:06.390 6301.538 - 6326.745: 43.1784% ( 395) 00:07:06.390 6326.745 - 6351.951: 45.2853% ( 387) 00:07:06.390 6351.951 - 6377.157: 47.4303% ( 394) 00:07:06.390 6377.157 - 6402.363: 49.6080% ( 400) 00:07:06.390 6402.363 - 6427.569: 51.7367% ( 391) 00:07:06.390 6427.569 - 6452.775: 53.9689% ( 410) 00:07:06.390 6452.775 - 6503.188: 58.2426% ( 785) 00:07:06.390 6503.188 - 6553.600: 62.3095% ( 747) 00:07:06.390 6553.600 - 6604.012: 65.9408% ( 667) 00:07:06.390 6604.012 - 6654.425: 68.8153% ( 528) 00:07:06.390 6654.425 - 6704.837: 70.9604% ( 394) 00:07:06.390 6704.837 - 6755.249: 72.6426% ( 309) 00:07:06.390 6755.249 - 6805.662: 73.7478% ( 203) 00:07:06.390 6805.662 - 6856.074: 74.6298% ( 162) 00:07:06.390 6856.074 - 6906.486: 75.3811% ( 138) 00:07:06.390 6906.486 - 6956.898: 76.0507% ( 123) 00:07:06.390 6956.898 - 7007.311: 76.6169% ( 104) 00:07:06.390 7007.311 - 7057.723: 77.1614% ( 100) 00:07:06.390 7057.723 - 7108.135: 77.5588% ( 73) 00:07:06.390 7108.135 - 7158.548: 77.9018% ( 63) 00:07:06.390 7158.548 - 7208.960: 78.1958% ( 54) 00:07:06.390 7208.960 - 7259.372: 78.5388% ( 63) 00:07:06.390 7259.372 - 7309.785: 78.8110% ( 50) 00:07:06.390 7309.785 - 7360.197: 79.0179% ( 38) 00:07:06.390 7360.197 - 7410.609: 79.2356% ( 40) 00:07:06.390 7410.609 - 7461.022: 79.4262% ( 35) 00:07:06.390 7461.022 - 7511.434: 79.6276% ( 37) 00:07:06.390 7511.434 - 7561.846: 79.7855% ( 29) 00:07:06.390 7561.846 - 7612.258: 79.9815% ( 36) 00:07:06.390 7612.258 - 7662.671: 80.1557% ( 32) 00:07:06.390 7662.671 - 7713.083: 80.4061% ( 46) 00:07:06.390 7713.083 - 7763.495: 80.7872% ( 70) 00:07:06.390 7763.495 - 7813.908: 81.1847% ( 73) 00:07:06.390 7813.908 - 7864.320: 81.5767% ( 72) 00:07:06.390 7864.320 - 7914.732: 82.0231% ( 82) 00:07:06.390 7914.732 - 7965.145: 82.4477% ( 78) 00:07:06.390 7965.145 - 8015.557: 82.9051% ( 84) 00:07:06.390 8015.557 - 8065.969: 83.3243% ( 77) 00:07:06.390 8065.969 - 8116.382: 83.7380% ( 76) 00:07:06.390 8116.382 - 8166.794: 84.1518% ( 76) 00:07:06.390 8166.794 - 8217.206: 84.6037% ( 83) 00:07:06.390 8217.206 - 8267.618: 85.0011% ( 73) 00:07:06.390 8267.618 - 8318.031: 85.3822% ( 70) 00:07:06.390 8318.031 - 8368.443: 85.7796% ( 73) 00:07:06.390 8368.443 - 8418.855: 86.1879% ( 75) 00:07:06.390 8418.855 - 8469.268: 86.6071% ( 77) 00:07:06.390 8469.268 - 8519.680: 87.0318% ( 78) 00:07:06.390 8519.680 - 8570.092: 87.4129% ( 70) 00:07:06.390 8570.092 - 8620.505: 87.6633% ( 46) 00:07:06.390 8620.505 - 8670.917: 87.8593% ( 36) 00:07:06.390 8670.917 - 8721.329: 88.0553% ( 36) 00:07:06.390 8721.329 - 8771.742: 88.2622% ( 38) 00:07:06.390 8771.742 - 8822.154: 88.4745% ( 39) 00:07:06.390 8822.154 - 8872.566: 88.6868% ( 39) 00:07:06.390 8872.566 - 8922.978: 88.8774% ( 35) 00:07:06.390 8922.978 - 8973.391: 89.0788% ( 37) 00:07:06.390 8973.391 - 9023.803: 89.2585% ( 33) 00:07:06.390 9023.803 - 9074.215: 89.4382% ( 33) 00:07:06.390 9074.215 - 9124.628: 89.6178% ( 33) 00:07:06.390 9124.628 - 9175.040: 89.8029% ( 34) 00:07:06.390 9175.040 - 9225.452: 89.9826% ( 33) 00:07:06.390 9225.452 - 9275.865: 90.1731% ( 35) 00:07:06.390 9275.865 - 9326.277: 90.3419% ( 31) 00:07:06.390 9326.277 - 9376.689: 90.5542% ( 39) 00:07:06.390 9376.689 - 9427.102: 90.7339% ( 33) 00:07:06.390 9427.102 - 9477.514: 90.8809% ( 27) 00:07:06.390 9477.514 - 9527.926: 91.0823% ( 37) 00:07:06.390 9527.926 - 
9578.338: 91.2565% ( 32) 00:07:06.390 9578.338 - 9628.751: 91.4199% ( 30) 00:07:06.390 9628.751 - 9679.163: 91.5941% ( 32) 00:07:06.390 9679.163 - 9729.575: 91.7574% ( 30) 00:07:06.390 9729.575 - 9779.988: 91.9371% ( 33) 00:07:06.390 9779.988 - 9830.400: 92.1385% ( 37) 00:07:06.390 9830.400 - 9880.812: 92.3236% ( 34) 00:07:06.390 9880.812 - 9931.225: 92.5468% ( 41) 00:07:06.390 9931.225 - 9981.637: 92.7591% ( 39) 00:07:06.390 9981.637 - 10032.049: 92.9497% ( 35) 00:07:06.390 10032.049 - 10082.462: 93.1457% ( 36) 00:07:06.390 10082.462 - 10132.874: 93.3689% ( 41) 00:07:06.390 10132.874 - 10183.286: 93.5812% ( 39) 00:07:06.390 10183.286 - 10233.698: 93.8371% ( 47) 00:07:06.390 10233.698 - 10284.111: 94.0984% ( 48) 00:07:06.390 10284.111 - 10334.523: 94.3543% ( 47) 00:07:06.390 10334.523 - 10384.935: 94.6374% ( 52) 00:07:06.390 10384.935 - 10435.348: 94.9205% ( 52) 00:07:06.390 10435.348 - 10485.760: 95.1982% ( 51) 00:07:06.390 10485.760 - 10536.172: 95.4758% ( 51) 00:07:06.390 10536.172 - 10586.585: 95.7208% ( 45) 00:07:06.390 10586.585 - 10636.997: 95.9331% ( 39) 00:07:06.390 10636.997 - 10687.409: 96.1400% ( 38) 00:07:06.390 10687.409 - 10737.822: 96.3578% ( 40) 00:07:06.390 10737.822 - 10788.234: 96.5429% ( 34) 00:07:06.390 10788.234 - 10838.646: 96.7552% ( 39) 00:07:06.390 10838.646 - 10889.058: 96.9294% ( 32) 00:07:06.390 10889.058 - 10939.471: 97.1254% ( 36) 00:07:06.390 10939.471 - 10989.883: 97.3051% ( 33) 00:07:06.390 10989.883 - 11040.295: 97.4956% ( 35) 00:07:06.390 11040.295 - 11090.708: 97.6753% ( 33) 00:07:06.390 11090.708 - 11141.120: 97.8114% ( 25) 00:07:06.390 11141.120 - 11191.532: 97.9366% ( 23) 00:07:06.390 11191.532 - 11241.945: 98.0455% ( 20) 00:07:06.390 11241.945 - 11292.357: 98.1598% ( 21) 00:07:06.390 11292.357 - 11342.769: 98.2470% ( 16) 00:07:06.390 11342.769 - 11393.182: 98.3123% ( 12) 00:07:06.390 11393.182 - 11443.594: 98.3885% ( 14) 00:07:06.390 11443.594 - 11494.006: 98.4593% ( 13) 00:07:06.390 11494.006 - 11544.418: 98.5083% ( 9) 00:07:06.390 11544.418 - 11594.831: 98.5355% ( 5) 00:07:06.390 11594.831 - 11645.243: 98.5573% ( 4) 00:07:06.390 11645.243 - 11695.655: 98.5791% ( 4) 00:07:06.390 11695.655 - 11746.068: 98.6008% ( 4) 00:07:06.390 11746.068 - 11796.480: 98.6063% ( 1) 00:07:06.390 12401.428 - 12451.840: 98.6226% ( 3) 00:07:06.390 12451.840 - 12502.252: 98.6444% ( 4) 00:07:06.390 12502.252 - 12552.665: 98.6662% ( 4) 00:07:06.390 12552.665 - 12603.077: 98.6879% ( 4) 00:07:06.390 12603.077 - 12653.489: 98.7043% ( 3) 00:07:06.390 12653.489 - 12703.902: 98.7260% ( 4) 00:07:06.390 12703.902 - 12754.314: 98.7478% ( 4) 00:07:06.390 12754.314 - 12804.726: 98.7642% ( 3) 00:07:06.390 12804.726 - 12855.138: 98.7859% ( 4) 00:07:06.390 12855.138 - 12905.551: 98.8077% ( 4) 00:07:06.390 12905.551 - 13006.375: 98.8404% ( 6) 00:07:06.390 13006.375 - 13107.200: 98.8785% ( 7) 00:07:06.390 13107.200 - 13208.025: 98.9220% ( 8) 00:07:06.390 13208.025 - 13308.849: 98.9547% ( 6) 00:07:06.390 14216.271 - 14317.095: 98.9710% ( 3) 00:07:06.390 14317.095 - 14417.920: 98.9874% ( 3) 00:07:06.390 14417.920 - 14518.745: 99.0091% ( 4) 00:07:06.390 14518.745 - 14619.569: 99.0309% ( 4) 00:07:06.390 14619.569 - 14720.394: 99.0527% ( 4) 00:07:06.390 14720.394 - 14821.218: 99.0745% ( 4) 00:07:06.390 14821.218 - 14922.043: 99.0963% ( 4) 00:07:06.390 14922.043 - 15022.868: 99.1126% ( 3) 00:07:06.390 15022.868 - 15123.692: 99.1344% ( 4) 00:07:06.390 15123.692 - 15224.517: 99.1561% ( 4) 00:07:06.390 15224.517 - 15325.342: 99.1779% ( 4) 00:07:06.390 15325.342 - 15426.166: 99.1888% ( 2) 
00:07:06.390 15426.166 - 15526.991: 99.2106% ( 4) 00:07:06.390 15526.991 - 15627.815: 99.2269% ( 3) 00:07:06.390 15627.815 - 15728.640: 99.2487% ( 4) 00:07:06.390 15728.640 - 15829.465: 99.2705% ( 4) 00:07:06.390 15829.465 - 15930.289: 99.2922% ( 4) 00:07:06.390 15930.289 - 16031.114: 99.3031% ( 2) 00:07:06.390 18148.431 - 18249.255: 99.3140% ( 2) 00:07:06.390 18249.255 - 18350.080: 99.3358% ( 4) 00:07:06.390 18350.080 - 18450.905: 99.3576% ( 4) 00:07:06.390 18450.905 - 18551.729: 99.3794% ( 4) 00:07:06.391 18551.729 - 18652.554: 99.4011% ( 4) 00:07:06.391 18652.554 - 18753.378: 99.4229% ( 4) 00:07:06.391 18753.378 - 18854.203: 99.4501% ( 5) 00:07:06.391 18854.203 - 18955.028: 99.4719% ( 4) 00:07:06.391 18955.028 - 19055.852: 99.4937% ( 4) 00:07:06.391 19055.852 - 19156.677: 99.5155% ( 4) 00:07:06.391 19156.677 - 19257.502: 99.5427% ( 5) 00:07:06.391 19257.502 - 19358.326: 99.5645% ( 4) 00:07:06.391 19358.326 - 19459.151: 99.5862% ( 4) 00:07:06.391 19459.151 - 19559.975: 99.6135% ( 5) 00:07:06.391 19559.975 - 19660.800: 99.6352% ( 4) 00:07:06.391 19660.800 - 19761.625: 99.6516% ( 3) 00:07:06.391 24399.557 - 24500.382: 99.6625% ( 2) 00:07:06.391 24500.382 - 24601.206: 99.6788% ( 3) 00:07:06.391 24601.206 - 24702.031: 99.6951% ( 3) 00:07:06.391 24702.031 - 24802.855: 99.7169% ( 4) 00:07:06.391 24802.855 - 24903.680: 99.7332% ( 3) 00:07:06.391 24903.680 - 25004.505: 99.7496% ( 3) 00:07:06.391 25004.505 - 25105.329: 99.7713% ( 4) 00:07:06.391 25105.329 - 25206.154: 99.7877% ( 3) 00:07:06.391 25206.154 - 25306.978: 99.8095% ( 4) 00:07:06.391 25306.978 - 25407.803: 99.8258% ( 3) 00:07:06.391 25407.803 - 25508.628: 99.8476% ( 4) 00:07:06.391 25508.628 - 25609.452: 99.8639% ( 3) 00:07:06.391 25609.452 - 25710.277: 99.8802% ( 3) 00:07:06.391 25710.277 - 25811.102: 99.9020% ( 4) 00:07:06.391 25811.102 - 26012.751: 99.9401% ( 7) 00:07:06.391 26012.751 - 26214.400: 99.9728% ( 6) 00:07:06.391 26214.400 - 26416.049: 100.0000% ( 5) 00:07:06.391 00:07:06.391 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:06.391 ============================================================================== 00:07:06.391 Range in us Cumulative IO count 00:07:06.391 5620.972 - 5646.178: 0.0163% ( 3) 00:07:06.391 5646.178 - 5671.385: 0.1361% ( 22) 00:07:06.391 5671.385 - 5696.591: 0.3158% ( 33) 00:07:06.391 5696.591 - 5721.797: 0.6098% ( 54) 00:07:06.391 5721.797 - 5747.003: 1.1215% ( 94) 00:07:06.391 5747.003 - 5772.209: 1.8674% ( 137) 00:07:06.391 5772.209 - 5797.415: 2.9072% ( 191) 00:07:06.391 5797.415 - 5822.622: 4.1050% ( 220) 00:07:06.391 5822.622 - 5847.828: 5.3953% ( 237) 00:07:06.391 5847.828 - 5873.034: 7.0449% ( 303) 00:07:06.391 5873.034 - 5898.240: 8.6672% ( 298) 00:07:06.391 5898.240 - 5923.446: 10.4312% ( 324) 00:07:06.391 5923.446 - 5948.652: 12.2332% ( 331) 00:07:06.391 5948.652 - 5973.858: 14.1877% ( 359) 00:07:06.391 5973.858 - 5999.065: 16.1531% ( 361) 00:07:06.391 5999.065 - 6024.271: 18.1511% ( 367) 00:07:06.391 6024.271 - 6049.477: 20.2417% ( 384) 00:07:06.391 6049.477 - 6074.683: 22.2071% ( 361) 00:07:06.391 6074.683 - 6099.889: 24.2051% ( 367) 00:07:06.391 6099.889 - 6125.095: 26.3175% ( 388) 00:07:06.391 6125.095 - 6150.302: 28.4517% ( 392) 00:07:06.391 6150.302 - 6175.508: 30.5858% ( 392) 00:07:06.391 6175.508 - 6200.714: 32.7036% ( 389) 00:07:06.391 6200.714 - 6225.920: 34.8160% ( 388) 00:07:06.391 6225.920 - 6251.126: 36.9610% ( 394) 00:07:06.391 6251.126 - 6276.332: 39.0788% ( 389) 00:07:06.391 6276.332 - 6301.538: 41.2565% ( 400) 00:07:06.391 6301.538 - 6326.745: 43.4451% ( 
402) 00:07:06.391 6326.745 - 6351.951: 45.5684% ( 390) 00:07:06.391 6351.951 - 6377.157: 47.7461% ( 400) 00:07:06.391 6377.157 - 6402.363: 49.9401% ( 403) 00:07:06.391 6402.363 - 6427.569: 52.0743% ( 392) 00:07:06.391 6427.569 - 6452.775: 54.2574% ( 401) 00:07:06.391 6452.775 - 6503.188: 58.6781% ( 812) 00:07:06.391 6503.188 - 6553.600: 62.7395% ( 746) 00:07:06.391 6553.600 - 6604.012: 66.4961% ( 690) 00:07:06.391 6604.012 - 6654.425: 69.4523% ( 543) 00:07:06.391 6654.425 - 6704.837: 71.6300% ( 400) 00:07:06.391 6704.837 - 6755.249: 73.1163% ( 273) 00:07:06.391 6755.249 - 6805.662: 74.1888% ( 197) 00:07:06.391 6805.662 - 6856.074: 74.9946% ( 148) 00:07:06.391 6856.074 - 6906.486: 75.6805% ( 126) 00:07:06.391 6906.486 - 6956.898: 76.3393% ( 121) 00:07:06.391 6956.898 - 7007.311: 76.8783% ( 99) 00:07:06.391 7007.311 - 7057.723: 77.2975% ( 77) 00:07:06.391 7057.723 - 7108.135: 77.6187% ( 59) 00:07:06.391 7108.135 - 7158.548: 77.9617% ( 63) 00:07:06.391 7158.548 - 7208.960: 78.3155% ( 65) 00:07:06.391 7208.960 - 7259.372: 78.6041% ( 53) 00:07:06.391 7259.372 - 7309.785: 78.8763% ( 50) 00:07:06.391 7309.785 - 7360.197: 79.1431% ( 49) 00:07:06.391 7360.197 - 7410.609: 79.3608% ( 40) 00:07:06.391 7410.609 - 7461.022: 79.5623% ( 37) 00:07:06.391 7461.022 - 7511.434: 79.7692% ( 38) 00:07:06.391 7511.434 - 7561.846: 79.9978% ( 42) 00:07:06.391 7561.846 - 7612.258: 80.2210% ( 41) 00:07:06.391 7612.258 - 7662.671: 80.4443% ( 41) 00:07:06.391 7662.671 - 7713.083: 80.7328% ( 53) 00:07:06.391 7713.083 - 7763.495: 81.0595% ( 60) 00:07:06.391 7763.495 - 7813.908: 81.4514% ( 72) 00:07:06.391 7813.908 - 7864.320: 81.9251% ( 87) 00:07:06.391 7864.320 - 7914.732: 82.3280% ( 74) 00:07:06.391 7914.732 - 7965.145: 82.7744% ( 82) 00:07:06.391 7965.145 - 8015.557: 83.2154% ( 81) 00:07:06.391 8015.557 - 8065.969: 83.6346% ( 77) 00:07:06.391 8065.969 - 8116.382: 84.0647% ( 79) 00:07:06.391 8116.382 - 8166.794: 84.5111% ( 82) 00:07:06.391 8166.794 - 8217.206: 84.9194% ( 75) 00:07:06.391 8217.206 - 8267.618: 85.3060% ( 71) 00:07:06.391 8267.618 - 8318.031: 85.6816% ( 69) 00:07:06.391 8318.031 - 8368.443: 86.0409% ( 66) 00:07:06.391 8368.443 - 8418.855: 86.4111% ( 68) 00:07:06.391 8418.855 - 8469.268: 86.7596% ( 64) 00:07:06.391 8469.268 - 8519.680: 87.1026% ( 63) 00:07:06.391 8519.680 - 8570.092: 87.4292% ( 60) 00:07:06.391 8570.092 - 8620.505: 87.7178% ( 53) 00:07:06.391 8620.505 - 8670.917: 87.9083% ( 35) 00:07:06.391 8670.917 - 8721.329: 88.0608% ( 28) 00:07:06.391 8721.329 - 8771.742: 88.2404% ( 33) 00:07:06.391 8771.742 - 8822.154: 88.4419% ( 37) 00:07:06.391 8822.154 - 8872.566: 88.5943% ( 28) 00:07:06.391 8872.566 - 8922.978: 88.7740% ( 33) 00:07:06.391 8922.978 - 8973.391: 88.9427% ( 31) 00:07:06.391 8973.391 - 9023.803: 89.0897% ( 27) 00:07:06.391 9023.803 - 9074.215: 89.2258% ( 25) 00:07:06.391 9074.215 - 9124.628: 89.3946% ( 31) 00:07:06.391 9124.628 - 9175.040: 89.5851% ( 35) 00:07:06.391 9175.040 - 9225.452: 89.7920% ( 38) 00:07:06.391 9225.452 - 9275.865: 89.9935% ( 37) 00:07:06.391 9275.865 - 9326.277: 90.1895% ( 36) 00:07:06.391 9326.277 - 9376.689: 90.3909% ( 37) 00:07:06.391 9376.689 - 9427.102: 90.5869% ( 36) 00:07:06.391 9427.102 - 9477.514: 90.7720% ( 34) 00:07:06.391 9477.514 - 9527.926: 90.9571% ( 34) 00:07:06.391 9527.926 - 9578.338: 91.1313% ( 32) 00:07:06.391 9578.338 - 9628.751: 91.3382% ( 38) 00:07:06.391 9628.751 - 9679.163: 91.5287% ( 35) 00:07:06.391 9679.163 - 9729.575: 91.7520% ( 41) 00:07:06.391 9729.575 - 9779.988: 91.9970% ( 45) 00:07:06.391 9779.988 - 9830.400: 92.2311% ( 43) 
00:07:06.391 9830.400 - 9880.812: 92.4706% ( 44) 00:07:06.391 9880.812 - 9931.225: 92.7101% ( 44) 00:07:06.391 9931.225 - 9981.637: 92.9497% ( 44) 00:07:06.391 9981.637 - 10032.049: 93.1675% ( 40) 00:07:06.391 10032.049 - 10082.462: 93.4125% ( 45) 00:07:06.391 10082.462 - 10132.874: 93.6302% ( 40) 00:07:06.391 10132.874 - 10183.286: 93.8426% ( 39) 00:07:06.391 10183.286 - 10233.698: 94.0113% ( 31) 00:07:06.391 10233.698 - 10284.111: 94.2128% ( 37) 00:07:06.391 10284.111 - 10334.523: 94.4251% ( 39) 00:07:06.391 10334.523 - 10384.935: 94.6537% ( 42) 00:07:06.391 10384.935 - 10435.348: 94.8606% ( 38) 00:07:06.391 10435.348 - 10485.760: 95.0621% ( 37) 00:07:06.391 10485.760 - 10536.172: 95.2744% ( 39) 00:07:06.391 10536.172 - 10586.585: 95.4813% ( 38) 00:07:06.391 10586.585 - 10636.997: 95.6882% ( 38) 00:07:06.391 10636.997 - 10687.409: 95.8515% ( 30) 00:07:06.391 10687.409 - 10737.822: 96.0529% ( 37) 00:07:06.391 10737.822 - 10788.234: 96.2380% ( 34) 00:07:06.391 10788.234 - 10838.646: 96.4286% ( 35) 00:07:06.391 10838.646 - 10889.058: 96.6137% ( 34) 00:07:06.391 10889.058 - 10939.471: 96.8097% ( 36) 00:07:06.391 10939.471 - 10989.883: 96.9730% ( 30) 00:07:06.391 10989.883 - 11040.295: 97.1309% ( 29) 00:07:06.391 11040.295 - 11090.708: 97.2833% ( 28) 00:07:06.391 11090.708 - 11141.120: 97.4303% ( 27) 00:07:06.391 11141.120 - 11191.532: 97.5555% ( 23) 00:07:06.391 11191.532 - 11241.945: 97.6644% ( 20) 00:07:06.391 11241.945 - 11292.357: 97.7733% ( 20) 00:07:06.391 11292.357 - 11342.769: 97.8604% ( 16) 00:07:06.391 11342.769 - 11393.182: 97.9530% ( 17) 00:07:06.391 11393.182 - 11443.594: 98.0128% ( 11) 00:07:06.391 11443.594 - 11494.006: 98.0782% ( 12) 00:07:06.391 11494.006 - 11544.418: 98.1108% ( 6) 00:07:06.391 11544.418 - 11594.831: 98.1435% ( 6) 00:07:06.391 11594.831 - 11645.243: 98.1598% ( 3) 00:07:06.391 11645.243 - 11695.655: 98.1816% ( 4) 00:07:06.391 11695.655 - 11746.068: 98.1980% ( 3) 00:07:06.391 11746.068 - 11796.480: 98.2197% ( 4) 00:07:06.391 11796.480 - 11846.892: 98.2415% ( 4) 00:07:06.391 11846.892 - 11897.305: 98.2633% ( 4) 00:07:06.391 11897.305 - 11947.717: 98.2905% ( 5) 00:07:06.391 11947.717 - 11998.129: 98.3068% ( 3) 00:07:06.391 11998.129 - 12048.542: 98.3286% ( 4) 00:07:06.391 12048.542 - 12098.954: 98.3504% ( 4) 00:07:06.391 12098.954 - 12149.366: 98.3722% ( 4) 00:07:06.391 12149.366 - 12199.778: 98.3939% ( 4) 00:07:06.391 12199.778 - 12250.191: 98.4103% ( 3) 00:07:06.391 12250.191 - 12300.603: 98.4321% ( 4) 00:07:06.391 12300.603 - 12351.015: 98.4538% ( 4) 00:07:06.391 12351.015 - 12401.428: 98.4702% ( 3) 00:07:06.391 12401.428 - 12451.840: 98.4919% ( 4) 00:07:06.391 12451.840 - 12502.252: 98.5137% ( 4) 00:07:06.391 12502.252 - 12552.665: 98.5355% ( 4) 00:07:06.391 12552.665 - 12603.077: 98.5518% ( 3) 00:07:06.392 12603.077 - 12653.489: 98.5736% ( 4) 00:07:06.392 12653.489 - 12703.902: 98.5954% ( 4) 00:07:06.392 12703.902 - 12754.314: 98.6063% ( 2) 00:07:06.392 12804.726 - 12855.138: 98.6117% ( 1) 00:07:06.392 12855.138 - 12905.551: 98.6172% ( 1) 00:07:06.392 12905.551 - 13006.375: 98.6444% ( 5) 00:07:06.392 13006.375 - 13107.200: 98.6716% ( 5) 00:07:06.392 13107.200 - 13208.025: 98.6879% ( 3) 00:07:06.392 13208.025 - 13308.849: 98.7097% ( 4) 00:07:06.392 13308.849 - 13409.674: 98.7369% ( 5) 00:07:06.392 13409.674 - 13510.498: 98.7642% ( 5) 00:07:06.392 13510.498 - 13611.323: 98.7914% ( 5) 00:07:06.392 13611.323 - 13712.148: 98.8349% ( 8) 00:07:06.392 13712.148 - 13812.972: 98.8730% ( 7) 00:07:06.392 13812.972 - 13913.797: 98.9166% ( 8) 00:07:06.392 13913.797 - 
00:07:06.392 [closing buckets of the preceding latency histogram: 14014.622 us (98.9765%) through 25306.978 us (100.0000%)]
00:07:06.392 
00:07:06.392 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:06.392 ==============================================================================
00:07:06.392        Range in us     Cumulative    IO count
00:07:06.393 [histogram buckets elided: 5595.766 us (0.0054%, 1 IO) through 23895.434 us (100.0000%)]
00:07:06.393 
00:07:06.393 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:06.393 ==============================================================================
00:07:06.393        Range in us     Cumulative    IO count
00:07:06.394 [histogram buckets elided: 5595.766 us (0.0109%, 2 IOs) through 22584.714 us (100.0000%)]
00:07:06.394 
00:07:06.394 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:06.394 ==============================================================================
00:07:06.394        Range in us     Cumulative    IO count
00:07:06.396 [histogram buckets elided: 5595.766 us (0.0054%, 1 IO) through 21173.169 us (100.0000%)]
00:07:06.396 
00:07:06.396   02:55:00 nvme.nvme_perf -- nvme/nvme.sh@23 -- /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:07.332 Initializing NVMe Controllers
00:07:07.332 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:07.332 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:07.332 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:07.332 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:07.332 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:07.332 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:07.332 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:07.332 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:07.332 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:07.332 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:07.332 Initialization complete. Launching workers.
00:07:07.332 ========================================================
00:07:07.332                                                                            Latency(us)
00:07:07.332 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:07.332 PCIE (0000:00:10.0) NSID 1 from core 0 :   17516.71     205.27    7317.08    5920.49   33367.48
00:07:07.332 PCIE (0000:00:11.0) NSID 1 from core 0 :   17516.71     205.27    7306.14    5941.87   32041.83
00:07:07.332 PCIE (0000:00:13.0) NSID 1 from core 0 :   17516.71     205.27    7295.17    5892.32   30576.09
00:07:07.332 PCIE (0000:00:12.0) NSID 1 from core 0 :   17516.71     205.27    7283.95    5902.46   28855.21
00:07:07.332 PCIE (0000:00:12.0) NSID 2 from core 0 :   17516.71     205.27    7272.91    6041.34   26967.88
00:07:07.332 PCIE (0000:00:12.0) NSID 3 from core 0 :   17516.71     205.27    7261.73    5944.89   25161.09
00:07:07.332 ========================================================
00:07:07.332 Total                                  :  105100.28    1231.64    7289.50    5892.32   33367.48
00:07:07.332 
00:07:07.332 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:07.332 =================================================================================
00:07:07.332   1.00000% :  6175.508us
00:07:07.332  10.00000% :  6503.188us
00:07:07.332  25.00000% :  6654.425us
00:07:07.332  50.00000% :  6956.898us
00:07:07.332  75.00000% :  7410.609us
00:07:07.332  90.00000% :  8217.206us
00:07:07.332  95.00000% :  8973.391us
00:07:07.332  98.00000% : 10233.698us
00:07:07.332  99.00000% : 11393.182us
00:07:07.332  99.50000% : 24702.031us
00:07:07.332  99.90000% : 32868.825us
00:07:07.332  99.99000% : 33473.772us
00:07:07.332  99.99900% : 33473.772us
00:07:07.332  99.99990% : 33473.772us
00:07:07.332  99.99999% : 33473.772us
00:07:07.332 
00:07:07.332 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:07.332 =================================================================================
00:07:07.332   1.00000% :  6326.745us
00:07:07.332  10.00000% :  6604.012us
00:07:07.332  25.00000% :  6755.249us
00:07:07.332  50.00000% :  6906.486us
00:07:07.332  75.00000% :  7360.197us
00:07:07.332  90.00000% :  8217.206us
00:07:07.332  95.00000% :  8822.154us
00:07:07.332  98.00000% : 10183.286us
00:07:07.332  99.00000% : 10889.058us
00:07:07.332  99.50000% : 23290.486us
00:07:07.332  99.90000% : 31658.929us
00:07:07.332  99.99000% : 32062.228us
00:07:07.332  99.99900% : 32062.228us
00:07:07.332  99.99990% : 32062.228us
00:07:07.332  99.99999% : 32062.228us
00:07:07.332 
00:07:07.332 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:07.332 =================================================================================
00:07:07.332   1.00000% :  6326.745us
00:07:07.332  10.00000% :  6604.012us
00:07:07.332  25.00000% :  6755.249us
00:07:07.332  50.00000% :  6906.486us
00:07:07.332  75.00000% :  7410.609us
00:07:07.332  90.00000% :  8166.794us
00:07:07.332  95.00000% :  8973.391us
00:07:07.332  98.00000% : 10284.111us
00:07:07.332  99.00000% : 10788.234us
00:07:07.332  99.50000% : 22383.065us
00:07:07.332  99.90000% : 30247.385us
00:07:07.332  99.99000% : 30650.683us
00:07:07.332  99.99900% : 30650.683us
00:07:07.332  99.99990% : 30650.683us
00:07:07.332  99.99999% : 30650.683us
00:07:07.332 
00:07:07.332 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:07.332 =================================================================================
00:07:07.332   1.00000% :  6326.745us
00:07:07.332  10.00000% :  6604.012us
00:07:07.332  25.00000% :  6755.249us
00:07:07.332  50.00000% :  6906.486us
00:07:07.332  75.00000% :  7360.197us
00:07:07.332  90.00000% :  8116.382us
00:07:07.332  95.00000% :  8973.391us
00:07:07.332  98.00000% : 10082.462us
00:07:07.332  99.00000% : 10737.822us
00:07:07.332  99.50000% : 21173.169us
00:07:07.332  99.90000% : 28432.542us
00:07:07.332  99.99000% : 28835.840us
00:07:07.332  99.99900% : 29037.489us
00:07:07.332  99.99990% : 29037.489us
00:07:07.332  99.99999% : 29037.489us
00:07:07.332 
00:07:07.332 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:07.332 =================================================================================
00:07:07.332   1.00000% :  6326.745us
00:07:07.332  10.00000% :  6604.012us
00:07:07.332  25.00000% :  6755.249us
00:07:07.332  50.00000% :  6956.898us
00:07:07.332  75.00000% :  7360.197us
00:07:07.332  90.00000% :  8166.794us
00:07:07.332  95.00000% :  9023.803us
00:07:07.332  98.00000% :  9931.225us
00:07:07.332  99.00000% : 10737.822us
00:07:07.332  99.50000% : 20568.222us
00:07:07.332  99.90000% : 26617.698us
00:07:07.332  99.99000% : 27020.997us
00:07:07.332  99.99900% : 27020.997us
00:07:07.332  99.99990% : 27020.997us
00:07:07.332  99.99999% : 27020.997us
00:07:07.332 
00:07:07.332 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:07.332 =================================================================================
00:07:07.332   1.00000% :  6301.538us
00:07:07.332  10.00000% :  6604.012us
00:07:07.332  25.00000% :  6755.249us
00:07:07.332  50.00000% :  6956.898us
00:07:07.332  75.00000% :  7360.197us
00:07:07.332  90.00000% :  8166.794us
00:07:07.332  95.00000% :  8973.391us
00:07:07.332  98.00000% :  9779.988us
00:07:07.332  99.00000% : 11090.708us
00:07:07.332  99.50000% : 19761.625us
00:07:07.332  99.90000% : 24802.855us
00:07:07.332  99.99000% : 25206.154us
00:07:07.332  99.99900% : 25206.154us
00:07:07.332  99.99990% : 25206.154us
00:07:07.332  99.99999% : 25206.154us
00:07:07.332 
00:07:07.332 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:07.332 ==============================================================================
00:07:07.332        Range in us     Cumulative    IO count
00:07:07.332 [histogram buckets elided: 5898.240 us (0.0057%, 1 IO) through 33473.772 us (100.0000%)]
00:07:07.332 
00:07:07.333 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:07.333 ==============================================================================
00:07:07.333        Range in us     Cumulative    IO count
00:07:07.333 [histogram buckets elided: 5923.446 us (0.0057%, 1 IO) through 32062.228 us (100.0000%)]
00:07:07.333 
00:07:07.333 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:07.333 ==============================================================================
00:07:07.333        Range in us     Cumulative    IO count
00:07:07.333 [histogram buckets elided: 5873.034 us (0.0057%, 1 IO) through 30650.683 us (100.0000%)]
00:07:07.333 
00:07:07.333 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:07.333 ==============================================================================
00:07:07.333        Range in us     Cumulative    IO count
00:07:07.333 [histogram buckets elided: 5898.240 us (0.0057%, 1 IO) through 10939.471 us (99.1218%); remainder truncated in this excerpt]
00:07:07.333 10939.471 - 10989.883: 99.1560% ( 6) 00:07:07.333 10989.883 - 11040.295: 99.1845% ( 5) 00:07:07.333 11040.295 - 11090.708: 99.2016% ( 3) 00:07:07.333 11090.708 - 11141.120: 99.2130% ( 2) 00:07:07.333 11141.120 - 11191.532: 99.2188% ( 1) 00:07:07.333 11191.532 - 11241.945: 99.2302% ( 2) 00:07:07.333 11241.945 - 11292.357: 99.2359% ( 1) 00:07:07.333 11292.357 - 11342.769: 99.2473% ( 2) 00:07:07.333 11342.769 - 11393.182: 99.2587% ( 2) 00:07:07.333 11393.182 - 11443.594: 99.2701% ( 2) 00:07:07.334 19862.449 - 19963.274: 99.2758% ( 1) 00:07:07.334 19963.274 - 20064.098: 99.2815% ( 1) 00:07:07.334 20265.748 - 20366.572: 99.3043% ( 4) 00:07:07.334 20366.572 - 20467.397: 99.3442% ( 7) 00:07:07.334 20467.397 - 20568.222: 99.3727% ( 5) 00:07:07.334 20568.222 - 20669.046: 99.4069% ( 6) 00:07:07.334 20669.046 - 20769.871: 99.4240% ( 3) 00:07:07.334 20769.871 - 20870.695: 99.4411% ( 3) 00:07:07.334 20870.695 - 20971.520: 99.4640% ( 4) 00:07:07.334 20971.520 - 21072.345: 99.4925% ( 5) 00:07:07.334 21072.345 - 21173.169: 99.5039% ( 2) 00:07:07.334 21173.169 - 21273.994: 99.5210% ( 3) 00:07:07.334 21273.994 - 21374.818: 99.5324% ( 2) 00:07:07.334 21374.818 - 21475.643: 99.5495% ( 3) 00:07:07.334 21475.643 - 21576.468: 99.5609% ( 2) 00:07:07.334 21576.468 - 21677.292: 99.5723% ( 2) 00:07:07.334 21677.292 - 21778.117: 99.5837% ( 2) 00:07:07.334 21778.117 - 21878.942: 99.5951% ( 2) 00:07:07.334 21878.942 - 21979.766: 99.6065% ( 2) 00:07:07.334 21979.766 - 22080.591: 99.6179% ( 2) 00:07:07.334 22080.591 - 22181.415: 99.6350% ( 3) 00:07:07.334 26416.049 - 26617.698: 99.7548% ( 21) 00:07:07.334 26617.698 - 26819.348: 99.7719% ( 3) 00:07:07.334 27625.945 - 27827.594: 99.8004% ( 5) 00:07:07.334 27827.594 - 28029.243: 99.8346% ( 6) 00:07:07.334 28029.243 - 28230.892: 99.8745% ( 7) 00:07:07.334 28230.892 - 28432.542: 99.9145% ( 7) 00:07:07.334 28432.542 - 28634.191: 99.9544% ( 7) 00:07:07.334 28634.191 - 28835.840: 99.9943% ( 7) 00:07:07.334 28835.840 - 29037.489: 100.0000% ( 1) 00:07:07.334 00:07:07.334 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:07.334 ============================================================================== 00:07:07.334 Range in us Cumulative IO count 00:07:07.334 6024.271 - 6049.477: 0.0057% ( 1) 00:07:07.334 6049.477 - 6074.683: 0.0228% ( 3) 00:07:07.334 6074.683 - 6099.889: 0.0684% ( 8) 00:07:07.334 6099.889 - 6125.095: 0.1198% ( 9) 00:07:07.334 6125.095 - 6150.302: 0.1654% ( 8) 00:07:07.334 6150.302 - 6175.508: 0.2395% ( 13) 00:07:07.334 6175.508 - 6200.714: 0.3079% ( 12) 00:07:07.334 6200.714 - 6225.920: 0.3821% ( 13) 00:07:07.334 6225.920 - 6251.126: 0.4733% ( 16) 00:07:07.334 6251.126 - 6276.332: 0.6786% ( 36) 00:07:07.334 6276.332 - 6301.538: 0.8782% ( 35) 00:07:07.334 6301.538 - 6326.745: 1.2546% ( 66) 00:07:07.334 6326.745 - 6351.951: 1.5568% ( 53) 00:07:07.334 6351.951 - 6377.157: 1.8305% ( 48) 00:07:07.334 6377.157 - 6402.363: 2.2183% ( 68) 00:07:07.334 6402.363 - 6427.569: 2.7258% ( 89) 00:07:07.334 6427.569 - 6452.775: 3.3588% ( 111) 00:07:07.334 6452.775 - 6503.188: 5.0981% ( 305) 00:07:07.334 6503.188 - 6553.600: 8.1033% ( 527) 00:07:07.334 6553.600 - 6604.012: 12.5912% ( 787) 00:07:07.334 6604.012 - 6654.425: 17.4498% ( 852) 00:07:07.334 6654.425 - 6704.837: 22.8729% ( 951) 00:07:07.334 6704.837 - 6755.249: 30.0068% ( 1251) 00:07:07.334 6755.249 - 6805.662: 37.4202% ( 1300) 00:07:07.334 6805.662 - 6856.074: 44.0180% ( 1157) 00:07:07.334 6856.074 - 6906.486: 49.8574% ( 1024) 00:07:07.334 6906.486 - 6956.898: 54.9099% ( 886) 00:07:07.334 
6956.898 - 7007.311: 60.0821% ( 907) 00:07:07.334 7007.311 - 7057.723: 63.9941% ( 686) 00:07:07.334 7057.723 - 7108.135: 66.7484% ( 483) 00:07:07.334 7108.135 - 7158.548: 68.9610% ( 388) 00:07:07.334 7158.548 - 7208.960: 71.3903% ( 426) 00:07:07.334 7208.960 - 7259.372: 73.0839% ( 297) 00:07:07.334 7259.372 - 7309.785: 74.2758% ( 209) 00:07:07.334 7309.785 - 7360.197: 75.5360% ( 221) 00:07:07.334 7360.197 - 7410.609: 76.9674% ( 251) 00:07:07.334 7410.609 - 7461.022: 77.8969% ( 163) 00:07:07.334 7461.022 - 7511.434: 79.3339% ( 252) 00:07:07.334 7511.434 - 7561.846: 80.2635% ( 163) 00:07:07.334 7561.846 - 7612.258: 81.1531% ( 156) 00:07:07.334 7612.258 - 7662.671: 82.1225% ( 170) 00:07:07.334 7662.671 - 7713.083: 83.1718% ( 184) 00:07:07.334 7713.083 - 7763.495: 84.1697% ( 175) 00:07:07.334 7763.495 - 7813.908: 85.1562% ( 173) 00:07:07.334 7813.908 - 7864.320: 86.1998% ( 183) 00:07:07.334 7864.320 - 7914.732: 86.9354% ( 129) 00:07:07.334 7914.732 - 7965.145: 88.0646% ( 198) 00:07:07.334 7965.145 - 8015.557: 88.8914% ( 145) 00:07:07.334 8015.557 - 8065.969: 89.3647% ( 83) 00:07:07.334 8065.969 - 8116.382: 89.9407% ( 101) 00:07:07.334 8116.382 - 8166.794: 90.3741% ( 76) 00:07:07.334 8166.794 - 8217.206: 90.7219% ( 61) 00:07:07.334 8217.206 - 8267.618: 91.2295% ( 89) 00:07:07.334 8267.618 - 8318.031: 91.5602% ( 58) 00:07:07.334 8318.031 - 8368.443: 91.9195% ( 63) 00:07:07.334 8368.443 - 8418.855: 92.2160% ( 52) 00:07:07.334 8418.855 - 8469.268: 92.5125% ( 52) 00:07:07.334 8469.268 - 8519.680: 92.8376% ( 57) 00:07:07.334 8519.680 - 8570.092: 93.0771% ( 42) 00:07:07.334 8570.092 - 8620.505: 93.3850% ( 54) 00:07:07.334 8620.505 - 8670.917: 93.7386% ( 62) 00:07:07.334 8670.917 - 8721.329: 94.0294% ( 51) 00:07:07.334 8721.329 - 8771.742: 94.2347% ( 36) 00:07:07.334 8771.742 - 8822.154: 94.4514% ( 38) 00:07:07.334 8822.154 - 8872.566: 94.5997% ( 26) 00:07:07.334 8872.566 - 8922.978: 94.8050% ( 36) 00:07:07.334 8922.978 - 8973.391: 94.9760% ( 30) 00:07:07.334 8973.391 - 9023.803: 95.1471% ( 30) 00:07:07.334 9023.803 - 9074.215: 95.3581% ( 37) 00:07:07.334 9074.215 - 9124.628: 95.6832% ( 57) 00:07:07.334 9124.628 - 9175.040: 95.8257% ( 25) 00:07:07.334 9175.040 - 9225.452: 96.0880% ( 46) 00:07:07.334 9225.452 - 9275.865: 96.3104% ( 39) 00:07:07.334 9275.865 - 9326.277: 96.5271% ( 38) 00:07:07.334 9326.277 - 9376.689: 96.8579% ( 58) 00:07:07.334 9376.689 - 9427.102: 97.0404% ( 32) 00:07:07.334 9427.102 - 9477.514: 97.2115% ( 30) 00:07:07.334 9477.514 - 9527.926: 97.3255% ( 20) 00:07:07.334 9527.926 - 9578.338: 97.4167% ( 16) 00:07:07.334 9578.338 - 9628.751: 97.5023% ( 15) 00:07:07.334 9628.751 - 9679.163: 97.5707% ( 12) 00:07:07.334 9679.163 - 9729.575: 97.6505% ( 14) 00:07:07.334 9729.575 - 9779.988: 97.7646% ( 20) 00:07:07.334 9779.988 - 9830.400: 97.9015% ( 24) 00:07:07.334 9830.400 - 9880.812: 97.9870% ( 15) 00:07:07.334 9880.812 - 9931.225: 98.0440% ( 10) 00:07:07.334 9931.225 - 9981.637: 98.1239% ( 14) 00:07:07.334 9981.637 - 10032.049: 98.1866% ( 11) 00:07:07.334 10032.049 - 10082.462: 98.2550% ( 12) 00:07:07.334 10082.462 - 10132.874: 98.3006% ( 8) 00:07:07.334 10132.874 - 10183.286: 98.3406% ( 7) 00:07:07.334 10183.286 - 10233.698: 98.4090% ( 12) 00:07:07.334 10233.698 - 10284.111: 98.4774% ( 12) 00:07:07.334 10284.111 - 10334.523: 98.5458% ( 12) 00:07:07.334 10334.523 - 10384.935: 98.5858% ( 7) 00:07:07.334 10384.935 - 10435.348: 98.6200% ( 6) 00:07:07.334 10435.348 - 10485.760: 98.6827% ( 11) 00:07:07.334 10485.760 - 10536.172: 98.7854% ( 18) 00:07:07.334 10536.172 - 10586.585: 98.8823% ( 
17) 00:07:07.334 10586.585 - 10636.997: 98.9336% ( 9) 00:07:07.334 10636.997 - 10687.409: 98.9792% ( 8) 00:07:07.334 10687.409 - 10737.822: 99.0078% ( 5) 00:07:07.334 10737.822 - 10788.234: 99.0363% ( 5) 00:07:07.334 10788.234 - 10838.646: 99.0534% ( 3) 00:07:07.334 10838.646 - 10889.058: 99.0591% ( 1) 00:07:07.334 10889.058 - 10939.471: 99.0648% ( 1) 00:07:07.334 10939.471 - 10989.883: 99.0705% ( 1) 00:07:07.334 10989.883 - 11040.295: 99.0819% ( 2) 00:07:07.334 11040.295 - 11090.708: 99.0876% ( 1) 00:07:07.334 11090.708 - 11141.120: 99.0933% ( 1) 00:07:07.334 11141.120 - 11191.532: 99.1047% ( 2) 00:07:07.334 11191.532 - 11241.945: 99.1161% ( 2) 00:07:07.334 11241.945 - 11292.357: 99.1275% ( 2) 00:07:07.334 11292.357 - 11342.769: 99.1389% ( 2) 00:07:07.334 11342.769 - 11393.182: 99.1503% ( 2) 00:07:07.334 11393.182 - 11443.594: 99.1617% ( 2) 00:07:07.334 11443.594 - 11494.006: 99.1731% ( 2) 00:07:07.334 11494.006 - 11544.418: 99.1902% ( 3) 00:07:07.334 11544.418 - 11594.831: 99.2016% ( 2) 00:07:07.334 11594.831 - 11645.243: 99.2130% ( 2) 00:07:07.334 11645.243 - 11695.655: 99.2302% ( 3) 00:07:07.334 11695.655 - 11746.068: 99.2416% ( 2) 00:07:07.334 11746.068 - 11796.480: 99.2530% ( 2) 00:07:07.334 11796.480 - 11846.892: 99.2644% ( 2) 00:07:07.334 11846.892 - 11897.305: 99.2701% ( 1) 00:07:07.334 18450.905 - 18551.729: 99.2758% ( 1) 00:07:07.334 19156.677 - 19257.502: 99.2815% ( 1) 00:07:07.334 19257.502 - 19358.326: 99.2929% ( 2) 00:07:07.334 19358.326 - 19459.151: 99.3100% ( 3) 00:07:07.334 19459.151 - 19559.975: 99.3271% ( 3) 00:07:07.334 19559.975 - 19660.800: 99.3442% ( 3) 00:07:07.334 19660.800 - 19761.625: 99.3613% ( 3) 00:07:07.334 19761.625 - 19862.449: 99.3784% ( 3) 00:07:07.334 19862.449 - 19963.274: 99.3955% ( 3) 00:07:07.334 19963.274 - 20064.098: 99.4126% ( 3) 00:07:07.334 20064.098 - 20164.923: 99.4297% ( 3) 00:07:07.334 20164.923 - 20265.748: 99.4526% ( 4) 00:07:07.334 20265.748 - 20366.572: 99.4640% ( 2) 00:07:07.334 20366.572 - 20467.397: 99.4868% ( 4) 00:07:07.334 20467.397 - 20568.222: 99.5039% ( 3) 00:07:07.334 20568.222 - 20669.046: 99.5210% ( 3) 00:07:07.334 20669.046 - 20769.871: 99.5324% ( 2) 00:07:07.334 20769.871 - 20870.695: 99.5438% ( 2) 00:07:07.334 20870.695 - 20971.520: 99.5552% ( 2) 00:07:07.334 20971.520 - 21072.345: 99.5609% ( 1) 00:07:07.334 21072.345 - 21173.169: 99.5780% ( 3) 00:07:07.334 21173.169 - 21273.994: 99.5951% ( 3) 00:07:07.334 21273.994 - 21374.818: 99.6008% ( 1) 00:07:07.334 21374.818 - 21475.643: 99.6179% ( 3) 00:07:07.334 21475.643 - 21576.468: 99.6350% ( 3) 00:07:07.334 24802.855 - 24903.680: 99.6521% ( 3) 00:07:07.334 24903.680 - 25004.505: 99.6807% ( 5) 00:07:07.334 25004.505 - 25105.329: 99.7719% ( 16) 00:07:07.334 25105.329 - 25206.154: 99.8004% ( 5) 00:07:07.334 26012.751 - 26214.400: 99.8346% ( 6) 00:07:07.334 26214.400 - 26416.049: 99.8802% ( 8) 00:07:07.334 26416.049 - 26617.698: 99.9259% ( 8) 00:07:07.334 26617.698 - 26819.348: 99.9658% ( 7) 00:07:07.334 26819.348 - 27020.997: 100.0000% ( 6) 00:07:07.334 00:07:07.334 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:07.334 ============================================================================== 00:07:07.334 Range in us Cumulative IO count 00:07:07.334 5923.446 - 5948.652: 0.0057% ( 1) 00:07:07.334 5948.652 - 5973.858: 0.0114% ( 1) 00:07:07.334 5973.858 - 5999.065: 0.0171% ( 1) 00:07:07.334 6024.271 - 6049.477: 0.0399% ( 4) 00:07:07.334 6049.477 - 6074.683: 0.0627% ( 4) 00:07:07.334 6074.683 - 6099.889: 0.1026% ( 7) 00:07:07.334 6099.889 - 6125.095: 0.1597% 
( 10) 00:07:07.334 6125.095 - 6150.302: 0.2224% ( 11) 00:07:07.334 6150.302 - 6175.508: 0.2965% ( 13) 00:07:07.334 6175.508 - 6200.714: 0.3764% ( 14) 00:07:07.334 6200.714 - 6225.920: 0.4676% ( 16) 00:07:07.334 6225.920 - 6251.126: 0.6102% ( 25) 00:07:07.334 6251.126 - 6276.332: 0.8497% ( 42) 00:07:07.334 6276.332 - 6301.538: 1.1291% ( 49) 00:07:07.334 6301.538 - 6326.745: 1.3059% ( 31) 00:07:07.334 6326.745 - 6351.951: 1.6366% ( 58) 00:07:07.334 6351.951 - 6377.157: 1.9104% ( 48) 00:07:07.334 6377.157 - 6402.363: 2.2696% ( 63) 00:07:07.334 6402.363 - 6427.569: 2.7714% ( 88) 00:07:07.334 6427.569 - 6452.775: 3.5071% ( 129) 00:07:07.334 6452.775 - 6503.188: 5.4288% ( 337) 00:07:07.334 6503.188 - 6553.600: 7.9665% ( 445) 00:07:07.334 6553.600 - 6604.012: 11.9297% ( 695) 00:07:07.334 6604.012 - 6654.425: 16.4348% ( 790) 00:07:07.334 6654.425 - 6704.837: 23.2835% ( 1201) 00:07:07.334 6704.837 - 6755.249: 29.8814% ( 1157) 00:07:07.334 6755.249 - 6805.662: 37.9277% ( 1411) 00:07:07.334 6805.662 - 6856.074: 43.7671% ( 1024) 00:07:07.334 6856.074 - 6906.486: 49.7605% ( 1051) 00:07:07.334 6906.486 - 6956.898: 56.1359% ( 1118) 00:07:07.334 6956.898 - 7007.311: 60.3159% ( 733) 00:07:07.334 7007.311 - 7057.723: 63.7489% ( 602) 00:07:07.334 7057.723 - 7108.135: 66.1154% ( 415) 00:07:07.334 7108.135 - 7158.548: 69.1207% ( 527) 00:07:07.334 7158.548 - 7208.960: 71.5328% ( 423) 00:07:07.334 7208.960 - 7259.372: 72.9129% ( 242) 00:07:07.334 7259.372 - 7309.785: 74.3043% ( 244) 00:07:07.334 7309.785 - 7360.197: 75.8497% ( 271) 00:07:07.334 7360.197 - 7410.609: 77.2354% ( 243) 00:07:07.334 7410.609 - 7461.022: 78.5356% ( 228) 00:07:07.334 7461.022 - 7511.434: 79.8016% ( 222) 00:07:07.334 7511.434 - 7561.846: 80.5657% ( 134) 00:07:07.334 7561.846 - 7612.258: 81.4211% ( 150) 00:07:07.334 7612.258 - 7662.671: 82.1624% ( 130) 00:07:07.334 7662.671 - 7713.083: 83.1204% ( 168) 00:07:07.334 7713.083 - 7763.495: 84.2381% ( 196) 00:07:07.334 7763.495 - 7813.908: 85.0422% ( 141) 00:07:07.334 7813.908 - 7864.320: 85.7949% ( 132) 00:07:07.334 7864.320 - 7914.732: 86.4678% ( 118) 00:07:07.334 7914.732 - 7965.145: 87.1521% ( 120) 00:07:07.334 7965.145 - 8015.557: 88.1615% ( 177) 00:07:07.334 8015.557 - 8065.969: 89.1766% ( 178) 00:07:07.334 8065.969 - 8116.382: 89.8438% ( 117) 00:07:07.334 8116.382 - 8166.794: 90.2771% ( 76) 00:07:07.334 8166.794 - 8217.206: 90.6934% ( 73) 00:07:07.334 8217.206 - 8267.618: 91.0698% ( 66) 00:07:07.334 8267.618 - 8318.031: 91.4062% ( 59) 00:07:07.334 8318.031 - 8368.443: 91.7712% ( 64) 00:07:07.334 8368.443 - 8418.855: 92.3358% ( 99) 00:07:07.334 8418.855 - 8469.268: 92.7406% ( 71) 00:07:07.334 8469.268 - 8519.680: 93.0828% ( 60) 00:07:07.334 8519.680 - 8570.092: 93.4934% ( 72) 00:07:07.334 8570.092 - 8620.505: 93.8013% ( 54) 00:07:07.334 8620.505 - 8670.917: 93.9838% ( 32) 00:07:07.334 8670.917 - 8721.329: 94.1663% ( 32) 00:07:07.334 8721.329 - 8771.742: 94.4229% ( 45) 00:07:07.334 8771.742 - 8822.154: 94.6282% ( 36) 00:07:07.334 8822.154 - 8872.566: 94.7479% ( 21) 00:07:07.334 8872.566 - 8922.978: 94.9076% ( 28) 00:07:07.334 8922.978 - 8973.391: 95.1186% ( 37) 00:07:07.334 8973.391 - 9023.803: 95.3011% ( 32) 00:07:07.334 9023.803 - 9074.215: 95.5748% ( 48) 00:07:07.334 9074.215 - 9124.628: 95.8314% ( 45) 00:07:07.334 9124.628 - 9175.040: 96.0709% ( 42) 00:07:07.334 9175.040 - 9225.452: 96.2534% ( 32) 00:07:07.334 9225.452 - 9275.865: 96.6412% ( 68) 00:07:07.334 9275.865 - 9326.277: 96.9035% ( 46) 00:07:07.334 9326.277 - 9376.689: 97.0233% ( 21) 00:07:07.334 9376.689 - 9427.102: 97.1886% ( 
29) 00:07:07.334 9427.102 - 9477.514: 97.3084% ( 21) 00:07:07.334 9477.514 - 9527.926: 97.4909% ( 32) 00:07:07.334 9527.926 - 9578.338: 97.6505% ( 28) 00:07:07.334 9578.338 - 9628.751: 97.8045% ( 27) 00:07:07.334 9628.751 - 9679.163: 97.8901% ( 15) 00:07:07.334 9679.163 - 9729.575: 97.9585% ( 12) 00:07:07.334 9729.575 - 9779.988: 98.0269% ( 12) 00:07:07.334 9779.988 - 9830.400: 98.0839% ( 10) 00:07:07.334 9830.400 - 9880.812: 98.1410% ( 10) 00:07:07.334 9880.812 - 9931.225: 98.1638% ( 4) 00:07:07.334 9931.225 - 9981.637: 98.2265% ( 11) 00:07:07.334 9981.637 - 10032.049: 98.2721% ( 8) 00:07:07.334 10032.049 - 10082.462: 98.3234% ( 9) 00:07:07.334 10082.462 - 10132.874: 98.3577% ( 6) 00:07:07.334 10132.874 - 10183.286: 98.3805% ( 4) 00:07:07.334 10183.286 - 10233.698: 98.4147% ( 6) 00:07:07.334 10233.698 - 10284.111: 98.4489% ( 6) 00:07:07.334 10284.111 - 10334.523: 98.4774% ( 5) 00:07:07.334 10334.523 - 10384.935: 98.5116% ( 6) 00:07:07.334 10384.935 - 10435.348: 98.5858% ( 13) 00:07:07.334 10435.348 - 10485.760: 98.6827% ( 17) 00:07:07.334 10485.760 - 10536.172: 98.7226% ( 7) 00:07:07.334 10536.172 - 10586.585: 98.7511% ( 5) 00:07:07.334 10586.585 - 10636.997: 98.7740% ( 4) 00:07:07.334 10636.997 - 10687.409: 98.8139% ( 7) 00:07:07.334 10687.409 - 10737.822: 98.8367% ( 4) 00:07:07.334 10737.822 - 10788.234: 98.8595% ( 4) 00:07:07.334 10788.234 - 10838.646: 98.8823% ( 4) 00:07:07.334 10838.646 - 10889.058: 98.9051% ( 4) 00:07:07.334 10889.058 - 10939.471: 98.9279% ( 4) 00:07:07.334 10939.471 - 10989.883: 98.9564% ( 5) 00:07:07.334 10989.883 - 11040.295: 98.9792% ( 4) 00:07:07.334 11040.295 - 11090.708: 99.0021% ( 4) 00:07:07.334 11090.708 - 11141.120: 99.0249% ( 4) 00:07:07.334 11141.120 - 11191.532: 99.0534% ( 5) 00:07:07.334 11191.532 - 11241.945: 99.0762% ( 4) 00:07:07.334 11241.945 - 11292.357: 99.0990% ( 4) 00:07:07.334 11292.357 - 11342.769: 99.1161% ( 3) 00:07:07.334 11342.769 - 11393.182: 99.1275% ( 2) 00:07:07.334 11393.182 - 11443.594: 99.1389% ( 2) 00:07:07.334 11494.006 - 11544.418: 99.1503% ( 2) 00:07:07.334 11544.418 - 11594.831: 99.1617% ( 2) 00:07:07.334 11594.831 - 11645.243: 99.1731% ( 2) 00:07:07.334 11645.243 - 11695.655: 99.1845% ( 2) 00:07:07.334 11695.655 - 11746.068: 99.1959% ( 2) 00:07:07.334 11746.068 - 11796.480: 99.2073% ( 2) 00:07:07.334 11796.480 - 11846.892: 99.2188% ( 2) 00:07:07.334 11846.892 - 11897.305: 99.2302% ( 2) 00:07:07.334 11897.305 - 11947.717: 99.2473% ( 3) 00:07:07.334 11947.717 - 11998.129: 99.2587% ( 2) 00:07:07.334 11998.129 - 12048.542: 99.2701% ( 2) 00:07:07.334 18551.729 - 18652.554: 99.2929% ( 4) 00:07:07.334 18652.554 - 18753.378: 99.3100% ( 3) 00:07:07.334 18753.378 - 18854.203: 99.3157% ( 1) 00:07:07.334 18854.203 - 18955.028: 99.3385% ( 4) 00:07:07.334 18955.028 - 19055.852: 99.3556% ( 3) 00:07:07.335 19055.852 - 19156.677: 99.3898% ( 6) 00:07:07.335 19156.677 - 19257.502: 99.4183% ( 5) 00:07:07.335 19257.502 - 19358.326: 99.4469% ( 5) 00:07:07.335 19358.326 - 19459.151: 99.4583% ( 2) 00:07:07.335 19459.151 - 19559.975: 99.4754% ( 3) 00:07:07.335 19559.975 - 19660.800: 99.4925% ( 3) 00:07:07.335 19660.800 - 19761.625: 99.5096% ( 3) 00:07:07.335 19761.625 - 19862.449: 99.5267% ( 3) 00:07:07.335 19862.449 - 19963.274: 99.5495% ( 4) 00:07:07.335 19963.274 - 20064.098: 99.5666% ( 3) 00:07:07.335 20064.098 - 20164.923: 99.5894% ( 4) 00:07:07.335 20164.923 - 20265.748: 99.6065% ( 3) 00:07:07.335 20265.748 - 20366.572: 99.6236% ( 3) 00:07:07.335 20366.572 - 20467.397: 99.6350% ( 2) 00:07:07.335 23088.837 - 23189.662: 99.6978% ( 11) 
00:07:07.604 02:55:01 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:07.604 02:55:01 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:07.604 02:55:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:07.604 02:55:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:07.604 ************************************
00:07:07.604 START TEST nvme_hello_world
00:07:07.604 ************************************
00:07:07.604 02:55:01 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:07.861 Initializing NVMe Controllers
00:07:07.861 Attached to 0000:00:10.0
00:07:07.861 Namespace ID: 1 size: 6GB
00:07:07.861 Attached to 0000:00:11.0
00:07:07.861 Namespace ID: 1 size: 5GB
00:07:07.861 Attached to 0000:00:13.0
00:07:07.861 Namespace ID: 1 size: 1GB
00:07:07.861 Attached to 0000:00:12.0
00:07:07.861 Namespace ID: 1 size: 4GB
00:07:07.861 Namespace ID: 2 size: 4GB
00:07:07.861 Namespace ID: 3 size: 4GB
00:07:07.861 Initialization complete.
00:07:07.861 INFO: using host memory buffer for IO
00:07:07.861 Hello world!
00:07:07.861 INFO: using host memory buffer for IO
00:07:07.861 Hello world!
00:07:07.861 INFO: using host memory buffer for IO
00:07:07.861 Hello world!
00:07:07.861 INFO: using host memory buffer for IO
00:07:07.861 Hello world!
00:07:07.861 INFO: using host memory buffer for IO
00:07:07.861 Hello world!
00:07:07.861 INFO: using host memory buffer for IO
00:07:07.861 Hello world!
00:07:07.861 ************************************
00:07:07.861 END TEST nvme_hello_world
00:07:07.861 ************************************
00:07:07.861
00:07:07.861 real 0m0.263s
00:07:07.861 user 0m0.095s
00:07:07.861 sys 0m0.120s
00:07:07.861 02:55:02 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:07.861 02:55:02 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
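The hello_world binary driven above is SPDK's introductory example: probe the PCIe controllers, attach, grab a namespace and an I/O queue pair, then push one block through a pinned host buffer. The sketch below is a compressed illustration of that flow, not the example's actual source; the buffer size, the LBA, and the omission of error handling and cleanup are our simplifications.

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *g_ctrlr;

    /* Return true to ask the driver to attach each discovered controller. */
    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts) { return true; }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts) { g_ctrlr = ctrlr; }

    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl) { *(bool *)arg = true; }

    int main(void)
    {
        struct spdk_env_opts opts;
        bool done = false;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) < 0 ||
            spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0 || g_ctrlr == NULL) {
            return 1;
        }

        struct spdk_nvme_ns *ns =
            spdk_nvme_ctrlr_get_ns(g_ctrlr, spdk_nvme_ctrlr_get_first_active_ns(g_ctrlr));
        struct spdk_nvme_qpair *qpair = spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);

        /* Pinned, DMA-able buffer: the "host memory buffer for IO" in the log. */
        char *buf = spdk_zmalloc(0x1000, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        snprintf(buf, 0x1000, "Hello world!\n");

        spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1, io_done, &done, 0);
        while (!done) {
            spdk_nvme_qpair_process_completions(qpair, 0);
        }
        /* A read of LBA 0 into a second buffer, then printing its contents,
         * completes the round trip; spdk_free()/detach cleanup omitted. */
        return 0;
    }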
00:07:07.861 02:55:02 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:07.861 02:55:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:07.861 02:55:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:07.861 02:55:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:07.861 ************************************
00:07:07.861 START TEST nvme_sgl
00:07:07.861 ************************************
00:07:07.861 02:55:02 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:08.120 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:08.120 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:08.120 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:08.120 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:08.120 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:08.120 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:08.120 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:08.120 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:08.120 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:08.120 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:08.120 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:08.120 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:08.120 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:08.120 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:08.120 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:08.120 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:08.120 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:08.120 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:08.120 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:08.120 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:08.120 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:08.120 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:08.120 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:08.120 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:08.120 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:08.120 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:08.120 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:08.120 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:08.120 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:08.120 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:08.120 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:08.120 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:08.120 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:08.120 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:08.120 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:08.120 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:08.120 NVMe Readv/Writev Request test
00:07:08.120 Attached to 0000:00:10.0
00:07:08.120 Attached to 0000:00:11.0
00:07:08.120 Attached to 0000:00:13.0
00:07:08.120 Attached to 0000:00:12.0
00:07:08.120 0000:00:10.0: build_io_request_2 test passed
00:07:08.120 0000:00:10.0: build_io_request_4 test passed
00:07:08.120 0000:00:10.0: build_io_request_5 test passed
00:07:08.120 0000:00:10.0: build_io_request_6 test passed
00:07:08.120 0000:00:10.0: build_io_request_7 test passed
00:07:08.120 0000:00:10.0: build_io_request_10 test passed
00:07:08.120 0000:00:11.0: build_io_request_2 test passed
00:07:08.120 0000:00:11.0: build_io_request_4 test passed
00:07:08.120 0000:00:11.0: build_io_request_5 test passed
00:07:08.120 0000:00:11.0: build_io_request_6 test passed
00:07:08.120 0000:00:11.0: build_io_request_7 test passed
00:07:08.120 0000:00:11.0: build_io_request_10 test passed
00:07:08.120 Cleaning up...
00:07:08.120
00:07:08.120 real 0m0.286s
00:07:08.120 user 0m0.145s
00:07:08.120 sys 0m0.095s
00:07:08.120 02:55:02 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:08.120 02:55:02 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:08.120 ************************************
00:07:08.120 END TEST nvme_sgl
00:07:08.120 ************************************
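The sgl test drives SPDK's vectored I/O path: instead of one flat buffer, the caller hands the driver a pair of callbacks that walk a scatter-gather list, and the "Invalid IO length parameter" lines are the negative cases where the SGE lengths deliberately do not add up to the request size. Below is a rough sketch of how such a request is built with spdk_nvme_ns_cmd_writev; struct sgl_ctx and the helper names are invented for illustration and are not the test's code.

    #include <stdint.h>
    #include "spdk/nvme.h"

    /* Iterator state over a simple two-element scatter list. */
    struct sgl_ctx {
        void     *base[2];
        uint32_t  len[2];
        int       idx;
    };

    /* Rewind the SGL iterator to byte `offset` of the payload. */
    static void reset_sgl(void *arg, uint32_t offset)
    {
        struct sgl_ctx *ctx = arg;

        for (ctx->idx = 0; ctx->idx < 2 && offset >= ctx->len[ctx->idx]; ctx->idx++) {
            offset -= ctx->len[ctx->idx];
        }
    }

    /* Hand the driver the next segment; it keeps calling until the request is mapped. */
    static int next_sge(void *arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *ctx = arg;

        *address = ctx->base[ctx->idx];
        *length = ctx->len[ctx->idx];
        ctx->idx++;
        return 0;
    }

    /* Submit a vectored write. If the SGE lengths do not add up to lba_count
     * blocks, submission fails up front - the "Invalid IO length parameter"
     * lines above are the test driving exactly that class of mismatch. */
    static int submit_vectored_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                                     struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count,
                                     spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
        return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count,
                                       cb_fn, cb_arg, 0, reset_sgl, next_sge);
    }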
00:07:08.120 02:55:02 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:08.120 02:55:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:08.120 02:55:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.120 02:55:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:08.120 ************************************
00:07:08.120 START TEST nvme_e2edp
00:07:08.120 ************************************
00:07:08.120 02:55:02 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:08.378 NVMe Write/Read with End-to-End data protection test
00:07:08.378 Attached to 0000:00:10.0
00:07:08.378 Attached to 0000:00:11.0
00:07:08.378 Attached to 0000:00:13.0
00:07:08.378 Attached to 0000:00:12.0
00:07:08.378 Cleaning up...
00:07:08.378
00:07:08.378 real 0m0.198s
00:07:08.378 user 0m0.072s
00:07:08.378 sys 0m0.087s
00:07:08.378 02:55:02 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:08.378 02:55:02 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:08.378 ************************************
00:07:08.378 END TEST nvme_e2edp
00:07:08.378 ************************************
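nvme_dp checks that writes and reads carry NVMe end-to-end protection information when the namespace is formatted with PI; on these emulated drives there is little to exercise, so the run goes straight to cleanup. A hedged sketch of what the submission side of such a test looks like, assuming the *_with_md command variant and the PRCHK I/O flags from spdk/nvme.h; the real test covers many more flag and PI-type combinations.

    #include <stdint.h>
    #include "spdk/nvme.h"

    /* Write one block with protection information checked end to end.
     * With PRACT clear, the host supplies PI in the metadata buffer and the
     * controller verifies the guard and reference tag on the way through. */
    static int write_protected_block(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                                     void *data, void *md, uint64_t lba,
                                     spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
        uint32_t io_flags = 0;

        if (spdk_nvme_ns_get_pi_type(ns) != SPDK_NVME_FMT_NVM_PROTECTION_DISABLE) {
            io_flags |= SPDK_NVME_IO_FLAGS_PRCHK_GUARD |
                        SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;
        }
        return spdk_nvme_ns_cmd_write_with_md(ns, qpair, data, md, lba, 1,
                                              cb_fn, cb_arg, io_flags,
                                              0 /* apptag mask */, 0 /* apptag */);
    }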
00:07:08.378 02:55:02 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:08.378 02:55:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:08.378 02:55:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.378 02:55:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:08.378 ************************************
00:07:08.378 START TEST nvme_reserve
00:07:08.378 ************************************
00:07:08.378 02:55:02 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:08.636 =====================================================
00:07:08.636 NVMe Controller at PCI bus 0, device 16, function 0
00:07:08.636 =====================================================
00:07:08.636 Reservations: Not Supported
00:07:08.636 =====================================================
00:07:08.636 NVMe Controller at PCI bus 0, device 17, function 0
00:07:08.636 =====================================================
00:07:08.636 Reservations: Not Supported
00:07:08.636 =====================================================
00:07:08.636 NVMe Controller at PCI bus 0, device 19, function 0
00:07:08.636 =====================================================
00:07:08.636 Reservations: Not Supported
00:07:08.636 =====================================================
00:07:08.636 NVMe Controller at PCI bus 0, device 18, function 0
00:07:08.636 =====================================================
00:07:08.636 Reservations: Not Supported
00:07:08.636 Reservation test passed
00:07:08.636
00:07:08.636 real 0m0.249s
00:07:08.636 user 0m0.087s
00:07:08.636 sys 0m0.115s
00:07:08.636 02:55:02 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:08.636 02:55:02 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:08.636 ************************************
00:07:08.636 END TEST nvme_reserve
00:07:08.636 ************************************
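Because all four emulated controllers report "Reservations: Not Supported", the reserve test reduces to checking the ONCS capability bit and skipping the reservation commands, which is why it passes immediately. A sketch of both halves under those assumptions; the register call is shown only for the supported case, and the key payload and action choices are illustrative.

    #include <stdbool.h>
    #include "spdk/nvme.h"

    /* The reserve test first asks whether the controller implements
     * reservations at all (the ONCS reservations bit in Identify Controller). */
    static bool reservations_supported(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

        return cdata->oncs.reservations != 0;
    }

    /* When supported, registering a reservation key looks roughly like this. */
    static int register_key(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                            struct spdk_nvme_reservation_register_data *key,
                            spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
        return spdk_nvme_ns_cmd_reservation_register(ns, qpair, key,
                                                     true /* ignore existing key */,
                                                     SPDK_NVME_RESERVE_REGISTER_KEY,
                                                     SPDK_NVME_RESERVE_PTPL_NO_CHANGES,
                                                     cb_fn, cb_arg);
    }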
00:07:08.636 02:55:02 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:08.636 02:55:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:08.636 02:55:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.636 02:55:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:08.636 ************************************
00:07:08.636 START TEST nvme_err_injection
00:07:08.636 ************************************
00:07:08.636 02:55:02 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:08.894 NVMe Error Injection test
00:07:08.894 Attached to 0000:00:10.0
00:07:08.894 Attached to 0000:00:11.0
00:07:08.894 Attached to 0000:00:13.0
00:07:08.894 Attached to 0000:00:12.0
00:07:08.894 0000:00:13.0: get features failed as expected
00:07:08.894 0000:00:12.0: get features failed as expected
00:07:08.894 0000:00:10.0: get features failed as expected
00:07:08.894 0000:00:11.0: get features failed as expected
00:07:08.894 0000:00:10.0: get features successfully as expected
00:07:08.894 0000:00:11.0: get features successfully as expected
00:07:08.894 0000:00:13.0: get features successfully as expected
00:07:08.894 0000:00:12.0: get features successfully as expected
00:07:08.894 0000:00:10.0: read failed as expected
00:07:08.894 0000:00:11.0: read failed as expected
00:07:08.894 0000:00:13.0: read failed as expected
00:07:08.894 0000:00:12.0: read failed as expected
00:07:08.894 0000:00:10.0: read successfully as expected
00:07:08.894 0000:00:11.0: read successfully as expected
00:07:08.894 0000:00:13.0: read successfully as expected
00:07:08.894 0000:00:12.0: read successfully as expected
00:07:08.894 Cleaning up...
00:07:08.894
00:07:08.894 real 0m0.205s
00:07:08.894 user 0m0.077s
00:07:08.894 sys 0m0.087s
00:07:08.894 02:55:03 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:08.894 02:55:03 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:08.894 ************************************
00:07:08.894 END TEST nvme_err_injection
00:07:08.894 ************************************
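The "failed as expected" / "successfully as expected" pairs come from toggling SPDK's software error injection around Get Features and Read commands: inject, observe the forced failure, remove, observe success. A sketch of the admin-queue half, assuming spdk_nvme_qpair_add_cmd_error_injection with a NULL qpair targets the admin queue as documented; the status codes the test actually injects may differ.

    #include <stdbool.h>
    #include "spdk/nvme.h"

    /* Make the next Get Features admin command complete with Invalid Field.
     * Passing NULL for the qpair targets the admin queue. */
    static int fail_next_get_features(struct spdk_nvme_ctrlr *ctrlr)
    {
        return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
                                                       SPDK_NVME_OPC_GET_FEATURES,
                                                       false /* still submit it */,
                                                       0 /* no timeout */,
                                                       1 /* fail one command */,
                                                       SPDK_NVME_SCT_GENERIC,
                                                       SPDK_NVME_SC_INVALID_FIELD);
    }

    /* Restore normal behavior so the follow-up command succeeds. */
    static void clear_injection(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
                                                   SPDK_NVME_OPC_GET_FEATURES);
    }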
00:07:08.894 02:55:03 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:08.894 02:55:03 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:08.894 02:55:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.894 02:55:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:08.894 ************************************
00:07:08.894 START TEST nvme_overhead
00:07:08.894 ************************************
00:07:08.894 02:55:03 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:10.268 Initializing NVMe Controllers
00:07:10.268 Attached to 0000:00:10.0
00:07:10.268 Attached to 0000:00:11.0
00:07:10.268 Attached to 0000:00:13.0
00:07:10.268 Attached to 0000:00:12.0
00:07:10.268 Initialization complete. Launching workers.
00:07:10.268 submit (in ns) avg, min, max = 11524.3, 9570.8, 70963.1
00:07:10.268 complete (in ns) avg, min, max = 7742.1, 7206.2, 429496.2
00:07:10.268
00:07:10.268 Submit histogram
00:07:10.268 ================
00:07:10.268 Range in us Cumulative Count
00:07:10.268 [histogram buckets omitted: cumulative count rises from 0.0059% ( 1) at 9.551 - 9.600 us to 100.0000% ( 1) at 70.892 - 71.286 us]
00:07:10.269
00:07:10.269 Complete histogram
00:07:10.269 ==================
00:07:10.269 Range in us Cumulative Count
00:07:10.269 [histogram buckets omitted: cumulative count rises from 0.0297% ( 5) at 7.188 - 7.237 us to 100.0000% ( 1) at 428.505 - 431.655 us]
00:07:10.270
00:07:10.270 real 0m1.228s
00:07:10.270 user 0m1.066s
00:07:10.270 sys 0m0.106s
00:07:10.270 02:55:04 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:10.270 02:55:04 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:07:10.270 ************************************
00:07:10.270 END TEST nvme_overhead
00:07:10.270 ************************************
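The overhead tool times each submission and completion in TSC ticks and feeds the deltas into SPDK's histogram helper, which is where the bucketed cumulative percentages above come from. A sketch of that tally-and-report loop, assuming the spdk_histogram_data API from spdk/histogram_data.h; the printf format only approximates the tool's layout.

    #include <stdint.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/histogram_data.h"

    /* Print one cumulative bucket, converting TSC ticks to microseconds. */
    static void print_bucket(void *ctx, uint64_t start, uint64_t end,
                             uint64_t count, uint64_t total, uint64_t so_far)
    {
        double us_per_tick = 1000000.0 / spdk_get_ticks_hz();

        if (count == 0) {
            return;
        }
        printf("%10.3f - %10.3f: %9.4f%% (%5ju)\n",
               start * us_per_tick, end * us_per_tick,
               (double)so_far * 100.0 / total, (uintmax_t)count);
    }

    /* Per I/O: tally the tick delta between issue time and now. */
    static void tally_submit(struct spdk_histogram_data *h, uint64_t tsc_at_submit)
    {
        spdk_histogram_data_tally(h, spdk_get_ticks() - tsc_at_submit);
    }

    /* At the end of the run, walk the buckets once to emit the table. */
    static void report(struct spdk_histogram_data *h)
    {
        spdk_histogram_data_iterate(h, print_bucket, NULL);
    }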
00:07:10.270 02:55:04 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:10.270 02:55:04 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:10.270 02:55:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:10.270 02:55:04 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:10.270 ************************************
00:07:10.270 START TEST nvme_arbitration
00:07:10.270 ************************************
00:07:10.270 02:55:04 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:13.602 Initializing NVMe Controllers
00:07:13.602 Attached to 0000:00:10.0
00:07:13.602 Attached to 0000:00:11.0
00:07:13.602 Attached to 0000:00:13.0
00:07:13.602 Attached to 0000:00:12.0
00:07:13.602 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:07:13.602 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:07:13.602 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:07:13.602 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:07:13.602 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:07:13.602 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:07:13.602 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:07:13.602 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:07:13.602 Initialization complete. Launching workers.
00:07:13.602 Starting thread on core 1 with urgent priority queue
00:07:13.602 Starting thread on core 2 with urgent priority queue
00:07:13.602 Starting thread on core 3 with urgent priority queue
00:07:13.602 Starting thread on core 0 with urgent priority queue
00:07:13.602 QEMU NVMe Ctrl (12340 ) core 0: 917.33 IO/s 109.01 secs/100000 ios
00:07:13.602 QEMU NVMe Ctrl (12342 ) core 0: 917.33 IO/s 109.01 secs/100000 ios
00:07:13.602 QEMU NVMe Ctrl (12341 ) core 1: 981.33 IO/s 101.90 secs/100000 ios
00:07:13.602 QEMU NVMe Ctrl (12342 ) core 1: 981.33 IO/s 101.90 secs/100000 ios
00:07:13.602 QEMU NVMe Ctrl (12343 ) core 2: 938.67 IO/s 106.53 secs/100000 ios
00:07:13.602 QEMU NVMe Ctrl (12342 ) core 3: 1002.67 IO/s 99.73 secs/100000 ios
00:07:13.602 ========================================================
00:07:13.602
00:07:13.602 real 0m3.297s
00:07:13.602 user 0m9.189s
00:07:13.602 sys 0m0.125s
00:07:13.602 02:55:07 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:13.602 02:55:07 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:07:13.602 ************************************
00:07:13.602 END TEST nvme_arbitration
00:07:13.602 ************************************
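The "urgent priority queue" threads differ from ordinary workers only in the arbitration priority their I/O qpair requests, which matters once the controller is brought up with weighted-round-robin arbitration (selected here through the example's -a flag). A sketch of allocating such a qpair; enabling WRR at controller attach time (arb_mechanism = SPDK_NVME_CC_AMS_WRR in the controller opts) is assumed and not shown.

    #include "spdk/nvme.h"

    /* Allocate an I/O queue pair at urgent arbitration priority. With WRR
     * enabled on the controller, commands on this qpair win arbitration over
     * high/medium/low priority queues - the per-core IO/s spread in the log
     * is the example measuring exactly that effect. */
    static struct spdk_nvme_qpair *alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.qprio = SPDK_NVME_QPRIO_URGENT;
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }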
00:07:13.860 Getting orig temperature thresholds of all controllers 00:07:13.860 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:13.860 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:13.860 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:13.860 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:13.860 Setting all controllers temperature threshold low to trigger AER 00:07:13.860 Waiting for all controllers temperature threshold to be set lower 00:07:13.860 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:13.860 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:13.860 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:13.860 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:13.860 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:13.860 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:13.860 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:13.860 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:13.860 Waiting for all controllers to trigger AER and reset threshold 00:07:13.860 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:13.860 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:13.860 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:13.860 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:13.860 Cleaning up... 00:07:13.860 00:07:13.860 real 0m0.228s 00:07:13.860 user 0m0.079s 00:07:13.860 sys 0m0.097s 00:07:13.860 02:55:07 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.860 02:55:08 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:13.860 ************************************ 00:07:13.860 END TEST nvme_single_aen 00:07:13.860 ************************************ 00:07:13.860 02:55:08 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:13.860 02:55:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.860 02:55:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.860 02:55:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:13.860 ************************************ 00:07:13.860 START TEST nvme_doorbell_aers 00:07:13.860 ************************************ 00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:13.860 02:55:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:14.132 [2024-12-10 02:55:08.359964] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:24.121 Executing: test_write_invalid_db 00:07:24.121 Waiting for AER completion... 00:07:24.121 Failure: test_write_invalid_db 00:07:24.121 00:07:24.121 Executing: test_invalid_db_write_overflow_sq 00:07:24.121 Waiting for AER completion... 00:07:24.121 Failure: test_invalid_db_write_overflow_sq 00:07:24.121 00:07:24.121 Executing: test_invalid_db_write_overflow_cq 00:07:24.121 Waiting for AER completion... 00:07:24.121 Failure: test_invalid_db_write_overflow_cq 00:07:24.121 00:07:24.121 02:55:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:24.121 02:55:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:24.121 [2024-12-10 02:55:18.389400] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:34.080 Executing: test_write_invalid_db 00:07:34.080 Waiting for AER completion... 00:07:34.080 Failure: test_write_invalid_db 00:07:34.080 00:07:34.080 Executing: test_invalid_db_write_overflow_sq 00:07:34.080 Waiting for AER completion... 00:07:34.080 Failure: test_invalid_db_write_overflow_sq 00:07:34.080 00:07:34.080 Executing: test_invalid_db_write_overflow_cq 00:07:34.080 Waiting for AER completion... 00:07:34.080 Failure: test_invalid_db_write_overflow_cq 00:07:34.080 00:07:34.080 02:55:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:34.080 02:55:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:34.080 [2024-12-10 02:55:28.423024] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:44.043 Executing: test_write_invalid_db 00:07:44.043 Waiting for AER completion... 00:07:44.043 Failure: test_write_invalid_db 00:07:44.043 00:07:44.043 Executing: test_invalid_db_write_overflow_sq 00:07:44.043 Waiting for AER completion... 00:07:44.043 Failure: test_invalid_db_write_overflow_sq 00:07:44.043 00:07:44.043 Executing: test_invalid_db_write_overflow_cq 00:07:44.043 Waiting for AER completion... 
00:07:44.043 Failure: test_invalid_db_write_overflow_cq 00:07:44.043 00:07:44.043 02:55:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:44.043 02:55:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:07:44.300 [2024-12-10 02:55:38.445272] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:54.264 Executing: test_write_invalid_db 00:07:54.264 Waiting for AER completion... 00:07:54.264 Failure: test_write_invalid_db 00:07:54.264 00:07:54.264 Executing: test_invalid_db_write_overflow_sq 00:07:54.264 Waiting for AER completion... 00:07:54.264 Failure: test_invalid_db_write_overflow_sq 00:07:54.264 00:07:54.264 Executing: test_invalid_db_write_overflow_cq 00:07:54.264 Waiting for AER completion... 00:07:54.264 Failure: test_invalid_db_write_overflow_cq 00:07:54.264 00:07:54.264 00:07:54.264 real 0m40.210s 00:07:54.264 user 0m34.216s 00:07:54.264 sys 0m5.616s 00:07:54.264 02:55:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.264 02:55:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:07:54.264 ************************************ 00:07:54.264 END TEST nvme_doorbell_aers 00:07:54.264 ************************************ 00:07:54.264 02:55:48 nvme -- nvme/nvme.sh@97 -- # uname 00:07:54.264 02:55:48 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:07:54.264 02:55:48 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:54.264 02:55:48 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:54.264 02:55:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.264 02:55:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.264 ************************************ 00:07:54.264 START TEST nvme_multi_aen 00:07:54.264 ************************************ 00:07:54.264 02:55:48 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:54.264 [2024-12-10 02:55:48.490574] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:54.264 [2024-12-10 02:55:48.490636] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:54.264 [2024-12-10 02:55:48.490646] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:54.265 [2024-12-10 02:55:48.492149] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:54.265 [2024-12-10 02:55:48.492189] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:54.265 [2024-12-10 02:55:48.492198] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:54.265 [2024-12-10 02:55:48.493260] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. 
Dropping the request. 00:07:54.265 [2024-12-10 02:55:48.493288] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:54.265 [2024-12-10 02:55:48.493296] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:54.265 [2024-12-10 02:55:48.494257] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:54.265 [2024-12-10 02:55:48.494281] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:54.265 [2024-12-10 02:55:48.494289] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63194) is not found. Dropping the request. 00:07:54.265 Child process pid: 63720 00:07:54.522 [Child] Asynchronous Event Request test 00:07:54.522 [Child] Attached to 0000:00:10.0 00:07:54.522 [Child] Attached to 0000:00:11.0 00:07:54.522 [Child] Attached to 0000:00:13.0 00:07:54.522 [Child] Attached to 0000:00:12.0 00:07:54.522 [Child] Registering asynchronous event callbacks... 00:07:54.522 [Child] Getting orig temperature thresholds of all controllers 00:07:54.522 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.523 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.523 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.523 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.523 [Child] Waiting for all controllers to trigger AER and reset threshold 00:07:54.523 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.523 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.523 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.523 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.523 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.523 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.523 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.523 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.523 [Child] Cleaning up... 00:07:54.523 Asynchronous Event Request test 00:07:54.523 Attached to 0000:00:10.0 00:07:54.523 Attached to 0000:00:11.0 00:07:54.523 Attached to 0000:00:13.0 00:07:54.523 Attached to 0000:00:12.0 00:07:54.523 Reset controller to setup AER completions for this process 00:07:54.523 Registering asynchronous event callbacks... 
00:07:54.523 Getting orig temperature thresholds of all controllers 00:07:54.523 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.523 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.523 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.523 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.523 Setting all controllers temperature threshold low to trigger AER 00:07:54.523 Waiting for all controllers temperature threshold to be set lower 00:07:54.523 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.523 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:54.523 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.523 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:54.523 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.523 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:54.523 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.523 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:54.523 Waiting for all controllers to trigger AER and reset threshold 00:07:54.523 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.523 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.523 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.523 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.523 Cleaning up... 00:07:54.523 00:07:54.523 real 0m0.436s 00:07:54.523 user 0m0.146s 00:07:54.523 sys 0m0.187s 00:07:54.523 02:55:48 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.523 02:55:48 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:07:54.523 ************************************ 00:07:54.523 END TEST nvme_multi_aen 00:07:54.523 ************************************ 00:07:54.523 02:55:48 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:54.523 02:55:48 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:54.523 02:55:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.523 02:55:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.523 ************************************ 00:07:54.523 START TEST nvme_startup 00:07:54.523 ************************************ 00:07:54.523 02:55:48 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:54.780 Initializing NVMe Controllers 00:07:54.780 Attached to 0000:00:10.0 00:07:54.780 Attached to 0000:00:11.0 00:07:54.780 Attached to 0000:00:13.0 00:07:54.780 Attached to 0000:00:12.0 00:07:54.780 Initialization complete. 00:07:54.780 Time used:166914.594 (us). 
00:07:54.780 00:07:54.780 real 0m0.237s 00:07:54.780 user 0m0.079s 00:07:54.780 sys 0m0.112s 00:07:54.780 02:55:49 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.780 02:55:49 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:07:54.780 ************************************ 00:07:54.780 END TEST nvme_startup 00:07:54.780 ************************************ 00:07:54.780 02:55:49 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:07:54.780 02:55:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.780 02:55:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.780 02:55:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.780 ************************************ 00:07:54.780 START TEST nvme_multi_secondary 00:07:54.780 ************************************ 00:07:54.780 02:55:49 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:07:54.780 02:55:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63770 00:07:54.780 02:55:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:07:54.780 02:55:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63771 00:07:54.780 02:55:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:07:54.781 02:55:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:07:58.063 Initializing NVMe Controllers 00:07:58.063 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:58.063 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:58.063 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:58.063 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:58.063 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:07:58.064 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:07:58.064 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:07:58.064 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:07:58.064 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:07:58.064 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:07:58.064 Initialization complete. Launching workers. 
00:07:58.064 ======================================================== 00:07:58.064 Latency(us) 00:07:58.064 Device Information : IOPS MiB/s Average min max 00:07:58.064 PCIE (0000:00:10.0) NSID 1 from core 2: 3150.35 12.31 5077.37 1137.68 12595.77 00:07:58.064 PCIE (0000:00:11.0) NSID 1 from core 2: 3150.35 12.31 5079.05 1055.29 12902.22 00:07:58.064 PCIE (0000:00:13.0) NSID 1 from core 2: 3150.35 12.31 5078.60 929.39 13000.99 00:07:58.064 PCIE (0000:00:12.0) NSID 1 from core 2: 3150.35 12.31 5078.57 924.16 13133.61 00:07:58.064 PCIE (0000:00:12.0) NSID 2 from core 2: 3150.35 12.31 5078.92 1108.92 12314.43 00:07:58.064 PCIE (0000:00:12.0) NSID 3 from core 2: 3150.35 12.31 5078.86 1097.65 12203.73 00:07:58.064 ======================================================== 00:07:58.064 Total : 18902.09 73.84 5078.56 924.16 13133.61 00:07:58.064 00:07:58.064 02:55:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63770 00:07:58.064 Initializing NVMe Controllers 00:07:58.064 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:58.064 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:58.064 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:58.064 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:58.064 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:07:58.064 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:07:58.064 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:07:58.064 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:07:58.064 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:07:58.064 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:07:58.064 Initialization complete. Launching workers. 00:07:58.064 ======================================================== 00:07:58.064 Latency(us) 00:07:58.064 Device Information : IOPS MiB/s Average min max 00:07:58.064 PCIE (0000:00:10.0) NSID 1 from core 1: 7761.55 30.32 2060.02 724.29 6388.23 00:07:58.064 PCIE (0000:00:11.0) NSID 1 from core 1: 7761.55 30.32 2061.03 734.28 6454.75 00:07:58.064 PCIE (0000:00:13.0) NSID 1 from core 1: 7761.55 30.32 2061.00 729.12 6602.33 00:07:58.064 PCIE (0000:00:12.0) NSID 1 from core 1: 7761.55 30.32 2060.97 725.53 6622.76 00:07:58.064 PCIE (0000:00:12.0) NSID 2 from core 1: 7761.55 30.32 2060.92 722.02 6442.43 00:07:58.064 PCIE (0000:00:12.0) NSID 3 from core 1: 7761.55 30.32 2060.97 726.75 7124.49 00:07:58.064 ======================================================== 00:07:58.064 Total : 46569.29 181.91 2060.82 722.02 7124.49 00:07:58.064 00:08:00.609 Initializing NVMe Controllers 00:08:00.609 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.609 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.609 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.609 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.609 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:00.609 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:00.609 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:00.609 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:00.609 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:00.609 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:00.609 Initialization complete. Launching workers. 
00:08:00.609 ======================================================== 00:08:00.609 Latency(us) 00:08:00.609 Device Information : IOPS MiB/s Average min max 00:08:00.609 PCIE (0000:00:10.0) NSID 1 from core 0: 10944.17 42.75 1460.73 659.91 7762.59 00:08:00.609 PCIE (0000:00:11.0) NSID 1 from core 0: 10944.17 42.75 1461.60 660.09 7839.48 00:08:00.609 PCIE (0000:00:13.0) NSID 1 from core 0: 10944.17 42.75 1461.58 677.22 7682.60 00:08:00.609 PCIE (0000:00:12.0) NSID 1 from core 0: 10944.17 42.75 1461.56 686.81 7738.36 00:08:00.609 PCIE (0000:00:12.0) NSID 2 from core 0: 10944.17 42.75 1461.53 680.58 7749.90 00:08:00.609 PCIE (0000:00:12.0) NSID 3 from core 0: 10944.17 42.75 1461.50 680.07 7687.18 00:08:00.609 ======================================================== 00:08:00.609 Total : 65665.04 256.50 1461.42 659.91 7839.48 00:08:00.609 00:08:00.609 02:55:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63771 00:08:00.609 02:55:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63846 00:08:00.609 02:55:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:00.609 02:55:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63847 00:08:00.609 02:55:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:00.609 02:55:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:03.908 Initializing NVMe Controllers 00:08:03.908 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:03.908 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:03.908 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:03.908 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:03.908 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:03.908 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:03.908 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:03.908 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:03.908 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:03.908 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:03.908 Initialization complete. Launching workers. 
00:08:03.908 ======================================================== 00:08:03.908 Latency(us) 00:08:03.908 Device Information : IOPS MiB/s Average min max 00:08:03.908 PCIE (0000:00:10.0) NSID 1 from core 0: 7866.15 30.73 2032.67 662.50 8732.52 00:08:03.908 PCIE (0000:00:11.0) NSID 1 from core 0: 7866.15 30.73 2033.68 689.68 8019.25 00:08:03.908 PCIE (0000:00:13.0) NSID 1 from core 0: 7866.15 30.73 2033.67 713.64 8373.51 00:08:03.908 PCIE (0000:00:12.0) NSID 1 from core 0: 7866.15 30.73 2033.64 708.65 8481.25 00:08:03.908 PCIE (0000:00:12.0) NSID 2 from core 0: 7866.15 30.73 2033.69 688.20 8873.46 00:08:03.908 PCIE (0000:00:12.0) NSID 3 from core 0: 7866.15 30.73 2033.73 672.73 9427.20 00:08:03.908 ======================================================== 00:08:03.908 Total : 47196.90 184.36 2033.51 662.50 9427.20 00:08:03.908 00:08:03.908 Initializing NVMe Controllers 00:08:03.908 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:03.908 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:03.908 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:03.908 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:03.908 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:03.908 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:03.908 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:03.908 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:03.908 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:03.908 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:03.908 Initialization complete. Launching workers. 00:08:03.908 ======================================================== 00:08:03.908 Latency(us) 00:08:03.908 Device Information : IOPS MiB/s Average min max 00:08:03.908 PCIE (0000:00:10.0) NSID 1 from core 1: 8004.94 31.27 1997.44 717.82 11509.03 00:08:03.908 PCIE (0000:00:11.0) NSID 1 from core 1: 8004.94 31.27 1998.38 739.92 10347.16 00:08:03.908 PCIE (0000:00:13.0) NSID 1 from core 1: 8004.94 31.27 1998.33 732.99 9286.22 00:08:03.908 PCIE (0000:00:12.0) NSID 1 from core 1: 8004.94 31.27 1998.29 740.46 11944.51 00:08:03.908 PCIE (0000:00:12.0) NSID 2 from core 1: 8004.94 31.27 1998.26 723.38 12746.56 00:08:03.908 PCIE (0000:00:12.0) NSID 3 from core 1: 8004.94 31.27 1998.29 732.50 11344.83 00:08:03.908 ======================================================== 00:08:03.908 Total : 48029.61 187.62 1998.17 717.82 12746.56 00:08:03.908 00:08:05.819 Initializing NVMe Controllers 00:08:05.819 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:05.819 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:05.819 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:05.819 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:05.819 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:05.819 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:05.819 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:05.819 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:05.819 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:05.819 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:05.819 Initialization complete. Launching workers. 
00:08:05.819 ======================================================== 00:08:05.819 Latency(us) 00:08:05.819 Device Information : IOPS MiB/s Average min max 00:08:05.819 PCIE (0000:00:10.0) NSID 1 from core 2: 4425.10 17.29 3613.07 722.92 15803.09 00:08:05.819 PCIE (0000:00:11.0) NSID 1 from core 2: 4425.10 17.29 3615.03 717.97 15223.59 00:08:05.819 PCIE (0000:00:13.0) NSID 1 from core 2: 4425.10 17.29 3615.33 749.20 15543.76 00:08:05.819 PCIE (0000:00:12.0) NSID 1 from core 2: 4425.10 17.29 3615.09 742.82 20301.25 00:08:05.819 PCIE (0000:00:12.0) NSID 2 from core 2: 4425.10 17.29 3615.03 740.18 15584.01 00:08:05.819 PCIE (0000:00:12.0) NSID 3 from core 2: 4425.10 17.29 3612.07 741.45 14662.68 00:08:05.819 ======================================================== 00:08:05.819 Total : 26550.62 103.71 3614.27 717.97 20301.25 00:08:05.819 00:08:05.819 02:55:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63846 00:08:05.819 02:55:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63847 00:08:05.819 00:08:05.819 real 0m10.876s 00:08:05.819 user 0m18.405s 00:08:05.819 sys 0m0.599s 00:08:05.819 02:55:59 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.819 ************************************ 00:08:05.819 END TEST nvme_multi_secondary 00:08:05.819 ************************************ 00:08:05.819 02:55:59 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:05.819 02:55:59 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:05.819 02:55:59 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:05.819 02:55:59 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62803 ]] 00:08:05.819 02:55:59 nvme -- common/autotest_common.sh@1094 -- # kill 62803 00:08:05.819 02:55:59 nvme -- common/autotest_common.sh@1095 -- # wait 62803 00:08:05.820 [2024-12-10 02:55:59.964718] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.964794] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.964823] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.964843] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.967279] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.967334] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.967351] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.967369] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.969798] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 
00:08:05.820 [2024-12-10 02:55:59.969851] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.969869] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.969888] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.972130] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.972166] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.972177] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 [2024-12-10 02:55:59.972188] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63719) is not found. Dropping the request. 00:08:05.820 02:56:00 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:05.820 02:56:00 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:05.820 02:56:00 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:05.820 02:56:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.820 02:56:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.820 02:56:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.820 ************************************ 00:08:05.820 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:05.820 ************************************ 00:08:05.820 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:05.820 * Looking for test storage... 
00:08:05.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:05.820 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:05.820 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:05.820 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:06.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.081 --rc genhtml_branch_coverage=1 00:08:06.081 --rc genhtml_function_coverage=1 00:08:06.081 --rc genhtml_legend=1 00:08:06.081 --rc geninfo_all_blocks=1 00:08:06.081 --rc geninfo_unexecuted_blocks=1 00:08:06.081 00:08:06.081 ' 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:06.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.081 --rc genhtml_branch_coverage=1 00:08:06.081 --rc genhtml_function_coverage=1 00:08:06.081 --rc genhtml_legend=1 00:08:06.081 --rc geninfo_all_blocks=1 00:08:06.081 --rc geninfo_unexecuted_blocks=1 00:08:06.081 00:08:06.081 ' 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:06.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.081 --rc genhtml_branch_coverage=1 00:08:06.081 --rc genhtml_function_coverage=1 00:08:06.081 --rc genhtml_legend=1 00:08:06.081 --rc geninfo_all_blocks=1 00:08:06.081 --rc geninfo_unexecuted_blocks=1 00:08:06.081 00:08:06.081 ' 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:06.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.081 --rc genhtml_branch_coverage=1 00:08:06.081 --rc genhtml_function_coverage=1 00:08:06.081 --rc genhtml_legend=1 00:08:06.081 --rc geninfo_all_blocks=1 00:08:06.081 --rc geninfo_unexecuted_blocks=1 00:08:06.081 00:08:06.081 ' 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:06.081 
02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64005 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64005 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64005 ']' 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.081 02:56:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:06.081 [2024-12-10 02:56:00.377554] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:08:06.081 [2024-12-10 02:56:00.377675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64005 ] 00:08:06.342 [2024-12-10 02:56:00.545794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.342 [2024-12-10 02:56:00.649799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.342 [2024-12-10 02:56:00.650254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.342 [2024-12-10 02:56:00.650308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.342 [2024-12-10 02:56:00.650342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:07.283 nvme0n1 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_fr47q.txt 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:07.283 true 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733799361 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64037 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:07.283 02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:07.283 
02:56:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:09.194 [2024-12-10 02:56:03.433500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:09.194 [2024-12-10 02:56:03.434054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:09.194 [2024-12-10 02:56:03.434150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:09.194 [2024-12-10 02:56:03.434208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.194 [2024-12-10 02:56:03.435903] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64037 00:08:09.194 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64037 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64037 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_fr47q.txt 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:09.194 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_fr47q.txt 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64005 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64005 ']' 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64005 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64005 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.195 killing process with pid 64005 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64005' 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64005 00:08:09.195 02:56:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64005 00:08:10.580 02:56:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:10.580 02:56:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:10.580 00:08:10.580 real 0m4.822s 00:08:10.580 user 0m17.296s 00:08:10.580 sys 0m0.505s 00:08:10.580 02:56:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:10.580 02:56:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:10.580 ************************************ 00:08:10.580 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:10.580 ************************************ 00:08:10.842 02:56:04 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:10.842 02:56:04 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:10.842 02:56:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.842 02:56:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.842 02:56:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.842 ************************************ 00:08:10.842 START TEST nvme_fio 00:08:10.842 ************************************ 00:08:10.842 02:56:04 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:10.842 02:56:04 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:10.842 02:56:04 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:10.843 02:56:04 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:10.843 02:56:04 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:10.843 02:56:04 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:10.843 02:56:04 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:10.843 02:56:04 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:10.843 02:56:04 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:10.843 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:10.843 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:10.843 02:56:05 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:10.843 02:56:05 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:10.843 02:56:05 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:10.843 02:56:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:10.843 02:56:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:11.104 02:56:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:11.104 02:56:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:11.104 02:56:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:11.104 02:56:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:11.104 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:11.104 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:11.104 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:11.104 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:11.104 02:56:05 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:11.104 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:11.104 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:11.104 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:11.104 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:11.104 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:11.104 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:11.365 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:11.365 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:11.365 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:11.365 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:11.365 02:56:05 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:11.365 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:11.365 fio-3.35 00:08:11.365 Starting 1 thread 00:08:17.969 00:08:17.969 test: (groupid=0, jobs=1): err= 0: pid=64173: Tue Dec 10 02:56:11 2024 00:08:17.969 read: IOPS=22.8k, BW=89.0MiB/s (93.3MB/s)(178MiB/2001msec) 00:08:17.969 slat (usec): min=3, max=134, avg= 5.03, stdev= 2.13 00:08:17.969 clat (usec): min=656, max=7610, avg=2804.75, stdev=790.48 00:08:17.969 lat (usec): min=668, max=7623, avg=2809.78, stdev=791.65 00:08:17.969 clat percentiles (usec): 00:08:17.969 | 1.00th=[ 2073], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2409], 00:08:17.969 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2606], 00:08:17.969 | 70.00th=[ 2704], 80.00th=[ 2868], 90.00th=[ 3720], 95.00th=[ 4817], 00:08:17.969 | 99.00th=[ 6194], 99.50th=[ 6456], 99.90th=[ 7046], 99.95th=[ 7177], 00:08:17.969 | 99.99th=[ 7570] 00:08:17.969 bw ( KiB/s): min=78360, max=99112, per=98.84%, avg=90040.00, stdev=10618.97, samples=3 00:08:17.969 iops : min=19590, max=24778, avg=22510.00, stdev=2654.74, samples=3 00:08:17.969 write: IOPS=22.6k, BW=88.5MiB/s (92.8MB/s)(177MiB/2001msec); 0 zone resets 00:08:17.969 slat (usec): min=3, max=207, avg= 5.33, stdev= 2.46 00:08:17.969 clat (usec): min=576, max=7611, avg=2808.81, stdev=783.71 00:08:17.969 lat (usec): min=588, max=7624, avg=2814.14, stdev=784.90 00:08:17.969 clat percentiles (usec): 00:08:17.969 | 1.00th=[ 2073], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:08:17.969 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2606], 00:08:17.969 | 70.00th=[ 2704], 80.00th=[ 2900], 90.00th=[ 3752], 95.00th=[ 4817], 00:08:17.969 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 6980], 99.95th=[ 7111], 00:08:17.969 | 99.99th=[ 7373] 00:08:17.969 bw ( KiB/s): min=78208, max=100080, per=99.66%, avg=90266.67, stdev=11107.53, samples=3 00:08:17.969 iops : min=19552, max=25020, avg=22566.67, stdev=2776.88, samples=3 00:08:17.969 lat (usec) : 750=0.01%, 1000=0.01% 00:08:17.970 lat (msec) : 2=0.65%, 4=90.98%, 10=8.36% 00:08:17.970 cpu : usr=98.70%, sys=0.20%, ctx=28, majf=0, minf=607 00:08:17.970 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:17.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:17.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:17.970 issued rwts: total=45573,45311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:17.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:17.970 00:08:17.970 Run status group 0 (all jobs): 00:08:17.970 READ: bw=89.0MiB/s (93.3MB/s), 89.0MiB/s-89.0MiB/s (93.3MB/s-93.3MB/s), io=178MiB (187MB), run=2001-2001msec 00:08:17.970 WRITE: bw=88.5MiB/s (92.8MB/s), 88.5MiB/s-88.5MiB/s (92.8MB/s-92.8MB/s), io=177MiB (186MB), run=2001-2001msec 00:08:17.970 ----------------------------------------------------- 00:08:17.970 Suppressions used: 00:08:17.970 count bytes template 00:08:17.970 1 32 /usr/src/fio/parse.c 00:08:17.970 1 8 libtcmalloc_minimal.so 00:08:17.970 ----------------------------------------------------- 00:08:17.970 00:08:17.970 02:56:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:17.970 02:56:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:17.970 02:56:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:17.970 02:56:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:17.970 02:56:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:17.970 02:56:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:17.970 02:56:12 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:17.970 02:56:12 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:17.970 02:56:12 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:18.231 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:18.231 fio-3.35 00:08:18.231 Starting 1 thread 00:08:30.490 00:08:30.490 test: (groupid=0, jobs=1): err= 0: pid=64229: Tue Dec 10 02:56:23 2024 00:08:30.490 read: IOPS=24.8k, BW=97.0MiB/s (102MB/s)(194MiB/2001msec) 00:08:30.490 slat (usec): min=3, max=101, avg= 4.73, stdev= 1.90 00:08:30.490 clat (usec): min=253, max=8960, avg=2570.06, stdev=634.97 00:08:30.490 lat (usec): min=259, max=9019, avg=2574.79, stdev=636.07 00:08:30.490 clat percentiles (usec): 00:08:30.490 | 1.00th=[ 1844], 5.00th=[ 2073], 10.00th=[ 2147], 20.00th=[ 2278], 00:08:30.490 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:08:30.490 | 70.00th=[ 2540], 80.00th=[ 2606], 90.00th=[ 2933], 95.00th=[ 3687], 00:08:30.490 | 99.00th=[ 5735], 99.50th=[ 5997], 99.90th=[ 6849], 99.95th=[ 7439], 00:08:30.490 | 99.99th=[ 8848] 00:08:30.490 bw ( KiB/s): min=95656, max=101808, per=99.95%, avg=99309.33, stdev=3234.46, samples=3 00:08:30.490 iops : min=23914, max=25452, avg=24827.33, stdev=808.61, samples=3 00:08:30.490 write: IOPS=24.7k, BW=96.4MiB/s (101MB/s)(193MiB/2001msec); 0 zone resets 00:08:30.490 slat (nsec): min=3444, max=91790, avg=4998.47, stdev=1938.26 00:08:30.490 clat (usec): min=278, max=8892, avg=2577.37, stdev=648.65 00:08:30.490 lat (usec): min=283, max=8906, avg=2582.37, stdev=649.76 00:08:30.490 clat percentiles (usec): 00:08:30.490 | 1.00th=[ 1827], 5.00th=[ 2073], 10.00th=[ 2147], 20.00th=[ 2311], 00:08:30.490 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:08:30.490 | 70.00th=[ 2540], 80.00th=[ 2606], 90.00th=[ 2933], 95.00th=[ 3785], 00:08:30.490 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6849], 99.95th=[ 7570], 00:08:30.490 | 99.99th=[ 8717] 00:08:30.490 bw ( KiB/s): min=95112, max=102776, per=100.00%, avg=99394.67, stdev=3910.69, samples=3 00:08:30.490 iops : min=23778, max=25694, avg=24848.67, stdev=977.67, samples=3 00:08:30.490 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:08:30.490 lat (msec) : 2=2.70%, 4=92.89%, 10=4.36% 00:08:30.490 cpu : usr=99.10%, sys=0.15%, ctx=23, majf=0, minf=607 00:08:30.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:30.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:30.490 issued rwts: total=49706,49406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:30.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:30.490 00:08:30.490 Run status group 0 (all jobs): 00:08:30.490 READ: bw=97.0MiB/s (102MB/s), 97.0MiB/s-97.0MiB/s (102MB/s-102MB/s), io=194MiB (204MB), run=2001-2001msec 00:08:30.490 WRITE: bw=96.4MiB/s (101MB/s), 96.4MiB/s-96.4MiB/s (101MB/s-101MB/s), io=193MiB (202MB), run=2001-2001msec 00:08:30.491 ----------------------------------------------------- 00:08:30.491 Suppressions used: 00:08:30.491 count bytes template 00:08:30.491 1 32 /usr/src/fio/parse.c 00:08:30.491 1 8 libtcmalloc_minimal.so 00:08:30.491 ----------------------------------------------------- 00:08:30.491 00:08:30.491 02:56:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 
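
A note on the fio_plugin trace that repeats before each of these runs: these builds carry ASAN, and a sanitized shared object cannot be dlopen()ed into the unsanitized /usr/src/fio/fio binary unless the sanitizer runtime is already loaded, so the wrapper resolves libasan from the plugin's dependencies and prepends it to LD_PRELOAD. A condensed sketch of that pattern, with the plugin path and job file as placeholders rather than values from this run (the traced loop also checks libclang_rt.asan for clang builds):

    # Resolve the ASAN runtime a plugin links against, then preload it
    # ahead of the plugin so an unsanitized host binary can dlopen() it.
    plugin=/path/to/spdk_nvme                      # hypothetical path
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    if [[ -n $asan_lib ]]; then
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio job.fio
    else
        LD_PRELOAD="$plugin" /usr/src/fio/fio job.fio
    fi

The order inside LD_PRELOAD matters: the sanitizer runtime must come before the plugin, which is why the trace shows LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'.
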
00:08:30.491 02:56:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:30.491 02:56:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:30.491 02:56:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:30.491 02:56:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:30.491 02:56:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:30.491 02:56:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:30.491 02:56:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:30.491 02:56:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:30.491 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:30.491 fio-3.35 00:08:30.491 Starting 1 thread 00:08:37.065 00:08:37.065 test: (groupid=0, jobs=1): err= 0: pid=64297: Tue Dec 10 02:56:30 2024 00:08:37.065 read: IOPS=24.0k, BW=93.9MiB/s (98.4MB/s)(188MiB/2001msec) 00:08:37.065 slat (nsec): min=3350, max=97463, avg=4966.60, stdev=2219.21 00:08:37.065 clat (usec): min=219, max=10038, avg=2660.37, stdev=751.07 00:08:37.065 lat (usec): min=223, max=10101, avg=2665.34, stdev=752.35 00:08:37.065 clat percentiles (usec): 00:08:37.065 | 1.00th=[ 1663], 5.00th=[ 2057], 10.00th=[ 2180], 20.00th=[ 2343], 00:08:37.065 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:08:37.065 
| 70.00th=[ 2540], 80.00th=[ 2737], 90.00th=[ 3359], 95.00th=[ 4424], 00:08:37.065 | 99.00th=[ 5735], 99.50th=[ 6063], 99.90th=[ 6718], 99.95th=[ 8291], 00:08:37.065 | 99.99th=[ 9896] 00:08:37.065 bw ( KiB/s): min=95776, max=99064, per=100.00%, avg=97914.67, stdev=1853.87, samples=3 00:08:37.065 iops : min=23944, max=24766, avg=24478.67, stdev=463.47, samples=3 00:08:37.065 write: IOPS=23.9k, BW=93.3MiB/s (97.8MB/s)(187MiB/2001msec); 0 zone resets 00:08:37.065 slat (nsec): min=3466, max=91478, avg=5255.96, stdev=2258.35 00:08:37.065 clat (usec): min=157, max=9971, avg=2658.26, stdev=746.20 00:08:37.065 lat (usec): min=161, max=9986, avg=2663.52, stdev=747.46 00:08:37.065 clat percentiles (usec): 00:08:37.065 | 1.00th=[ 1663], 5.00th=[ 2057], 10.00th=[ 2180], 20.00th=[ 2343], 00:08:37.065 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2507], 00:08:37.065 | 70.00th=[ 2573], 80.00th=[ 2737], 90.00th=[ 3326], 95.00th=[ 4359], 00:08:37.065 | 99.00th=[ 5735], 99.50th=[ 6063], 99.90th=[ 7111], 99.95th=[ 8356], 00:08:37.065 | 99.99th=[ 9634] 00:08:37.065 bw ( KiB/s): min=95576, max=99784, per=100.00%, avg=97904.00, stdev=2139.47, samples=3 00:08:37.065 iops : min=23894, max=24946, avg=24476.00, stdev=534.87, samples=3 00:08:37.065 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.04% 00:08:37.065 lat (msec) : 2=3.63%, 4=89.95%, 10=6.34%, 20=0.01% 00:08:37.065 cpu : usr=99.15%, sys=0.15%, ctx=5, majf=0, minf=607 00:08:37.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:37.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:37.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:37.065 issued rwts: total=48092,47801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:37.065 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:37.065 00:08:37.065 Run status group 0 (all jobs): 00:08:37.065 READ: bw=93.9MiB/s (98.4MB/s), 93.9MiB/s-93.9MiB/s (98.4MB/s-98.4MB/s), io=188MiB (197MB), run=2001-2001msec 00:08:37.065 WRITE: bw=93.3MiB/s (97.8MB/s), 93.3MiB/s-93.3MiB/s (97.8MB/s-97.8MB/s), io=187MiB (196MB), run=2001-2001msec 00:08:37.065 ----------------------------------------------------- 00:08:37.065 Suppressions used: 00:08:37.065 count bytes template 00:08:37.065 1 32 /usr/src/fio/parse.c 00:08:37.065 1 8 libtcmalloc_minimal.so 00:08:37.065 ----------------------------------------------------- 00:08:37.065 00:08:37.065 02:56:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:37.065 02:56:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:37.065 02:56:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:37.065 02:56:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:37.065 02:56:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:37.065 02:56:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:37.324 02:56:31 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:37.324 02:56:31 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:37.324 02:56:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:37.582 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:37.582 fio-3.35 00:08:37.582 Starting 1 thread 00:08:47.572 00:08:47.572 test: (groupid=0, jobs=1): err= 0: pid=64352: Tue Dec 10 02:56:40 2024 00:08:47.572 read: IOPS=23.2k, BW=90.6MiB/s (95.0MB/s)(181MiB/2001msec) 00:08:47.572 slat (nsec): min=4225, max=67680, avg=5095.61, stdev=2066.84 00:08:47.572 clat (usec): min=244, max=9244, avg=2755.18, stdev=811.62 00:08:47.572 lat (usec): min=249, max=9301, avg=2760.28, stdev=812.85 00:08:47.572 clat percentiles (usec): 00:08:47.572 | 1.00th=[ 1745], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:08:47.572 | 30.00th=[ 2442], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:08:47.572 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3916], 95.00th=[ 4555], 00:08:47.572 | 99.00th=[ 6128], 99.50th=[ 6980], 99.90th=[ 7767], 99.95th=[ 8029], 00:08:47.572 | 99.99th=[ 8979] 00:08:47.572 bw ( KiB/s): min=81976, max=94800, per=97.46%, avg=90413.33, stdev=7308.88, samples=3 00:08:47.572 iops : min=20494, max=23700, avg=22603.33, stdev=1827.22, samples=3 00:08:47.572 write: IOPS=23.1k, BW=90.1MiB/s (94.4MB/s)(180MiB/2001msec); 0 zone resets 00:08:47.572 slat (nsec): min=4362, max=83034, avg=5350.90, stdev=2040.70 00:08:47.572 clat (usec): min=202, max=9108, avg=2757.92, stdev=815.28 00:08:47.572 lat (usec): min=206, max=9121, avg=2763.27, stdev=816.49 00:08:47.572 clat percentiles (usec): 00:08:47.572 | 1.00th=[ 1745], 5.00th=[ 2311], 10.00th=[ 2343], 20.00th=[ 2409], 00:08:47.572 | 30.00th=[ 2442], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:08:47.572 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3949], 95.00th=[ 4555], 00:08:47.572 | 99.00th=[ 6194], 99.50th=[ 7111], 99.90th=[ 7767], 
99.95th=[ 8029], 00:08:47.572 | 99.99th=[ 8848] 00:08:47.572 bw ( KiB/s): min=81680, max=95744, per=98.31%, avg=90658.67, stdev=7798.56, samples=3 00:08:47.572 iops : min=20420, max=23936, avg=22664.67, stdev=1949.64, samples=3 00:08:47.572 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:08:47.573 lat (msec) : 2=2.08%, 4=88.51%, 10=9.35% 00:08:47.573 cpu : usr=99.20%, sys=0.10%, ctx=3, majf=0, minf=605 00:08:47.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:47.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:47.573 issued rwts: total=46410,46130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:47.573 00:08:47.573 Run status group 0 (all jobs): 00:08:47.573 READ: bw=90.6MiB/s (95.0MB/s), 90.6MiB/s-90.6MiB/s (95.0MB/s-95.0MB/s), io=181MiB (190MB), run=2001-2001msec 00:08:47.573 WRITE: bw=90.1MiB/s (94.4MB/s), 90.1MiB/s-90.1MiB/s (94.4MB/s-94.4MB/s), io=180MiB (189MB), run=2001-2001msec 00:08:47.573 ----------------------------------------------------- 00:08:47.573 Suppressions used: 00:08:47.573 count bytes template 00:08:47.573 1 32 /usr/src/fio/parse.c 00:08:47.573 1 8 libtcmalloc_minimal.so 00:08:47.573 ----------------------------------------------------- 00:08:47.573 00:08:47.573 02:56:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:47.573 02:56:40 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:47.573 00:08:47.573 real 0m35.834s 00:08:47.573 user 0m21.359s 00:08:47.573 sys 0m26.454s 00:08:47.573 02:56:40 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.573 ************************************ 00:08:47.573 END TEST nvme_fio 00:08:47.573 ************************************ 00:08:47.573 02:56:40 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:47.573 00:08:47.573 real 1m45.296s 00:08:47.573 user 3m43.161s 00:08:47.573 sys 0m36.871s 00:08:47.573 02:56:40 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.573 02:56:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.573 ************************************ 00:08:47.573 END TEST nvme 00:08:47.573 ************************************ 00:08:47.573 02:56:40 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:47.573 02:56:40 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:47.573 02:56:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.573 02:56:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.573 02:56:40 -- common/autotest_common.sh@10 -- # set +x 00:08:47.573 ************************************ 00:08:47.573 START TEST nvme_scc 00:08:47.573 ************************************ 00:08:47.573 02:56:40 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:47.573 * Looking for test storage... 
00:08:47.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:47.573 02:56:40 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:47.573 02:56:40 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:47.573 02:56:40 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:47.573 02:56:40 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:47.573 02:56:40 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.573 02:56:41 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:47.573 02:56:41 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:47.573 02:56:41 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.573 02:56:41 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:47.573 02:56:41 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.573 02:56:41 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.573 02:56:41 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.573 02:56:41 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:47.573 02:56:41 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.573 02:56:41 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:47.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.573 --rc genhtml_branch_coverage=1 00:08:47.573 --rc genhtml_function_coverage=1 00:08:47.573 --rc genhtml_legend=1 00:08:47.573 --rc geninfo_all_blocks=1 00:08:47.573 --rc geninfo_unexecuted_blocks=1 00:08:47.573 00:08:47.573 ' 00:08:47.573 02:56:41 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:47.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.573 --rc genhtml_branch_coverage=1 00:08:47.573 --rc genhtml_function_coverage=1 00:08:47.573 --rc genhtml_legend=1 00:08:47.573 --rc geninfo_all_blocks=1 00:08:47.573 --rc geninfo_unexecuted_blocks=1 00:08:47.573 00:08:47.573 ' 00:08:47.573 02:56:41 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:47.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.573 --rc genhtml_branch_coverage=1 00:08:47.573 --rc genhtml_function_coverage=1 00:08:47.573 --rc genhtml_legend=1 00:08:47.573 --rc geninfo_all_blocks=1 00:08:47.573 --rc geninfo_unexecuted_blocks=1 00:08:47.573 00:08:47.573 ' 00:08:47.573 02:56:41 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:47.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.573 --rc genhtml_branch_coverage=1 00:08:47.573 --rc genhtml_function_coverage=1 00:08:47.573 --rc genhtml_legend=1 00:08:47.573 --rc geninfo_all_blocks=1 00:08:47.573 --rc geninfo_unexecuted_blocks=1 00:08:47.573 00:08:47.573 ' 00:08:47.573 02:56:41 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:47.573 02:56:41 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:47.573 02:56:41 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:47.573 02:56:41 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:47.573 02:56:41 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:47.573 02:56:41 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.573 02:56:41 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.573 02:56:41 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.573 02:56:41 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.573 02:56:41 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.573 02:56:41 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.573 02:56:41 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.573 02:56:41 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:47.573 02:56:41 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
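
Before the controller scan starts, note the `lt 1.15 2` walk in the preamble above: that is scripts/common.sh comparing the detected lcov version against 2, splitting both version strings on '.', '-' and ':' and comparing the components numerically. A self-contained sketch of that comparison idiom (illustrative, not the verbatim scripts/common.sh source):

    # Field-wise numeric version comparison, as traced above:
    # split on . - : and compare component by component.
    version_lt() {                    # succeeds when $1 < $2
        local IFS=.-: i a b
        local -a v1=($1) v2=($2)
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}
            ((a < b)) && return 0     # first differing field decides
            ((a > b)) && return 1
        done
        return 1                      # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"

Missing components are padded with 0, so 1.15 compares against 2.0 and loses on the first field, which is why the trace above ends up selecting the pre-2.x LCOV_OPTS block.
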
00:08:47.573 02:56:41 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:47.573 02:56:41 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:47.573 02:56:41 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:47.573 02:56:41 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:47.573 02:56:41 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:47.573 02:56:41 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:47.573 02:56:41 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:47.573 02:56:41 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:47.573 02:56:41 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:47.573 02:56:41 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.573 02:56:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:47.573 02:56:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:47.573 02:56:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:47.573 02:56:41 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:47.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:47.573 Waiting for block devices as requested 00:08:47.573 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:47.573 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:47.573 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:47.573 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:52.849 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:52.849 02:56:46 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:52.849 02:56:46 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:52.849 02:56:46 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:52.849 02:56:46 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.849 02:56:46 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
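
Everything from here to the end of the scan is one mechanism unrolled: nvme_get pipes `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0` through a `while IFS=: read -r reg val` loop and evals each pair into the global associative array nvme0, which later checks query by register name. With the xtrace noise stripped away, the shape of that loop is roughly the following sketch (a paraphrase, not the verbatim nvme/functions.sh source):

    # Capture id-ctrl output into an associative array keyed by register.
    declare -gA nvme0=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue    # skip banner/blank lines
        reg=${reg//[[:space:]]/}                # "vid       " -> "vid"
        nvme0[$reg]=${val# }                    # nvme0[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vendor id: ${nvme0[vid]}"

The sketch uses a direct array assignment where the traced code evals a quoted string; the resulting lookup table is the same either way.
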
00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:52.849 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
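
Several of the registers landing in nvme0 here are bitmasks rather than scalars: lpa=0x7 and oacs=0x12a above, and the oncs value this scan reports a little further down, each pack one capability per bit, and tests like this SCC suite ultimately key off a single bit (in the NVMe spec, ONCS bit 8 advertises the Copy command). A hypothetical check against values of this kind, where the helper name and the echoed wording are illustrative:

    # Test one capability bit in a register captured by the scan.
    supports_bit() {                  # supports_bit <mask> <bit>
        (( ($1 >> $2) & 1 ))
    }
    oncs=0x15d                        # value the scan reports below
    if supports_bit "$oncs" 8; then
        echo "controller advertises the Copy command"
    fi

With oncs=0x15d, bit 8 is set (0x15d >> 8 leaves 1), so the check succeeds for these QEMU controllers.
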
00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:52.850 02:56:46 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:52.850 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:52.851 02:56:46 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:52.851 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # 
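The repeated IFS=: / read -r reg val / eval triplets throughout this trace are the nvme_get helper splitting each "field : value" line of nvme-cli output on the first colon and keying a named associative array by the register name (which is also why "ps 0" lands in the array as ps0 once whitespace is stripped). A minimal standalone sketch of that pattern, with hypothetical names (parse_id_ctrl, ctrl) and assuming nvme-cli is on PATH:

    # Split each "reg : val" line from nvme-cli on the first colon and
    # key an associative array by the register name, as the trace does.
    parse_id_ctrl() {
        local dev=$1 reg val
        declare -gA ctrl=()
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue
            reg=${reg//[[:space:]]/}      # strip padding around the name
            ctrl[$reg]=${val# }           # drop the single leading space
        done < <(nvme id-ctrl "$dev")
    }
    parse_id_ctrl /dev/nvme0 && echo "oncs=${ctrl[oncs]} nn=${ctrl[nn]}"

Note that read assigns everything after the first colon to val, so multi-colon values such as the ps0 power-state string above survive intact.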
read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:08:52.852 
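flbas=0x4 captured here for ng0n1 selects LBA format index 4; combined with the lbafN descriptor strings parsed a little further down, that yields the active block size as 2^lbads. A sketch of the lookup, seeding the array with the values shown in this trace:

    # flbas low nibble -> in-use LBA format index; lbads in the matching
    # lbafN descriptor is log2 of the block size.
    declare -A ng0n1=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$(( ${ng0n1[flbas]} & 0xf ))
    [[ ${ng0n1[lbaf$fmt]} =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo "LBA format $fmt: $((1 << lbads))-byte blocks"   # 4096-byte blocks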
02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:08:52.852 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:08:52.853 02:56:46 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:52.853 02:56:46 nvme_scc 
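The for-loop entering its second iteration here uses a bash extglob so that a single pattern picks up both the generic char device (ng0n1) and the block device (nvme0n1) under the controller's sysfs directory. An isolated illustration of how that alternation expands, assuming extglob and an example controller path:

    # How the @(...) alternation in the loop expands: for
    # ctrl=/sys/class/nvme/nvme0 it matches ng0n* and nvme0n* entries.
    shopt -s extglob
    ctrl=/sys/class/nvme/nvme0              # example path, as in the trace
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue            # glob stays literal on no match
        echo "namespace entry: ${ns##*/}"   # -> ng0n1, then nvme0n1
    done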
-- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.853 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:52.854 02:56:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.854 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:52.855 02:56:46 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:52.855 02:56:46 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:52.855 02:56:46 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.855 02:56:46 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:52.855 02:56:46 
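With nvme0 fully parsed, the trace records it in the global bookkeeping arrays (ctrls, nvmes, bdfs, ordered_ctrls) and moves to nvme1, first running its PCI address through pci_can_use; the =~ test against an empty left-hand side above simply means the allow/block list variable is unset in this run. A rough sketch of the bookkeeping step, with a hypothetical allow-list check standing in for the real helper (its variable names are not visible in this trace):

    # Bookkeeping as in the trace; pci_allowed is a hypothetical stand-in
    # for pci_can_use.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    register_ctrl() {
        local dev=$1 bdf=$2
        ctrls[$dev]=$dev
        nvmes[$dev]=${dev}_ns                # per-controller namespace map
        bdfs[$dev]=$bdf
        ordered_ctrls[${dev/nvme/}]=$dev     # index by controller number
    }
    pci_allowed() {
        [[ -z ${PCI_ALLOWED:-} || " $PCI_ALLOWED " == *" $1 "* ]]
    }
    pci_allowed 0000:00:10.0 && register_ctrl nvme1 0000:00:10.0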
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.855 
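The sn, mn, and fr values captured just above keep their trailing blanks ('12340 ', 'QEMU NVMe Ctrl ') because id-ctrl string fields are fixed-width, space-padded byte ranges. If they need to be compared or printed cleanly, trim them first; one way, assuming extglob:

    # Fixed-width id-ctrl strings are space padded; strip trailing blanks.
    shopt -s extglob
    sn='12340       '
    sn=${sn%%+([[:space:]])}    # -> '12340'
    echo "[$sn]"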
02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:52.855 
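Two nvme1 fields just recorded use packed encodings: ver=0x10400 stores the spec version as major/minor/tertiary bytes (1.4.0), and mdts=7 is a power-of-two multiplier on the controller's minimum memory page size. A decode sketch, assuming the common 4 KiB minimum page size (the actual value comes from CAP.MPSMIN, which this trace doesn't show):

    # ver: MJR in bits 31:16, MNR in 15:8, TER in 7:0.
    ver=$((0x10400))
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    # mdts: max transfer = 2^mdts * min page size (4096 assumed here).
    mdts=7
    printf 'max transfer: %d KiB\n' $(( (1 << mdts) * 4096 / 1024 ))   # 512 KiB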
02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.855 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- 
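oacs=0x12a at the end of the block above is a bitmask of optional admin-command capabilities, one bit per feature. The labels below follow the NVMe base spec's bit assignments and should be checked against the spec revision in use:

    # Single-bit capability tests on OACS (0x12a = bits 1, 3, 5, 8 set).
    oacs=$((0x12a))
    (( oacs & (1 << 1) )) && echo 'Format NVM supported'
    (( oacs & (1 << 3) )) && echo 'Namespace management supported'
    (( oacs & (1 << 8) )) && echo 'Doorbell buffer config supported'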
nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 
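wctemp=343 and cctemp=373 above are reported in Kelvin, following the spec's temperature-threshold convention; the usual integer conversion to Celsius:

    # Temperature thresholds are Kelvin; subtract 273 for Celsius.
    wctemp=343 cctemp=373
    echo "warning at $((wctemp - 273)) C, critical at $((cctemp - 273)) C"   # 70 C / 100 C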
'nvme1[mtfa]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.856 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
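What this stretch of trace shows is the nvme_get loop in nvme/functions.sh: each "name : value" line that `nvme id-ctrl` prints is split by `IFS=:` / `read -r reg val` (functions.sh@21), tested non-empty (@22), and eval'd into a bash associative array (@23), e.g. nvme1[wctemp]=343. A minimal sketch of that pattern, assuming nvme-cli's human-readable output format; the helper name sketch_nvme_get and the exact trimming are ours, not SPDK's:

# Minimal sketch (not SPDK's implementation) of the parse loop traced above.
shopt -s extglob
sketch_nvme_get() {
  local ref=$1 dev=$2 reg val
  declare -gA "$ref"                     # global associative array, as at @20
  while IFS=: read -r reg val; do        # split each line on ':', as at @21
    reg=${reg//[[:space:]]/}             # drop the column padding in the key
    val=${val##+( )}                     # drop leading padding, keep inner spaces
    [[ -n $reg && -n $val ]] || continue # skip blank/header lines, as at @22
    eval "${ref}[\$reg]=\$val"           # e.g. nvme1[wctemp]=343, as at @23
  done < <(nvme id-ctrl "$dev")
}
sketch_nvme_get nvme1 /dev/nvme1         # then: echo "${nvme1[wctemp]}"  -> 343

Routing the assignment through eval is what lets values with internal spaces, like the ps0 power-state string further down, land in the array intact instead of being word-split.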
00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.857 02:56:46 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:52.857 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.858 02:56:46 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
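Once the controller registers are captured, functions.sh@53-@57 (just above) switch to the namespaces: a nameref _ctrl_ns is pointed at nvme1_ns and an extglob pattern walks the controller's sysfs entries. A sketch of just that glob, with the path hard-coded for illustration; it only prints something on a machine that actually has these /sys/class/nvme nodes:

# Sketch of the enumeration at functions.sh@54/@55: one extglob pattern
# matches both the generic char-device nodes (ng1n1) and the block-device
# nodes (nvme1n1) under the controller's sysfs directory.
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme1
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1", so the pattern
  # expands to @(ng1|nvme1n)* and matches ng1n1 then nvme1n1.
  echo "found namespace node: ${ns##*/}"
done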
00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:08:52.858 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:08:52.859 02:56:46 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 
02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:08:52.859 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
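The id-ns fields being recorded here are enough to size the namespace: flbas=0x7 selects LBA format 7, whose descriptor (lbaf7, the entry flagged "(in use)") carries lbads:12, i.e. 2^12 = 4096-byte blocks, and nsze=0x17a17a is the size in blocks. A quick check of the arithmetic:

# Worked example with the values traced for this namespace:
nsze=$((0x17a17a))              # 1548666 logical blocks
lbads=12                        # from "lbaf7: ms:64 lbads:12 rp:0 (in use)"
echo $((nsze * (1 << lbads)))   # 6343335936 bytes, roughly 5.9 GiB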
00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:08:52.860 
02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.860 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:08:52.861 02:56:46 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:08:52.861 02:56:46 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:52.861 02:56:46 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:08:52.861 02:56:46 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:52.861 02:56:46 nvme_scc -- scripts/common.sh@27 -- # return 0
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"'
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"'
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]]
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "'
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "'
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval
'nvme2[fr]="8.0.0 "' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:52.861 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
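One register worth decoding from this dump: mdts=7, captured a few entries up. MDTS is a power of two expressed in units of the controller's minimum memory page size (CAP.MPSMIN); the arithmetic below assumes a 4 KiB minimum page, which is typical for QEMU's emulated controller but is an assumption here, not something the trace records:

# Hedged arithmetic: maximum data transfer size implied by MDTS.
# Assumes CAP.MPSMIN = 0, i.e. a 4 KiB minimum memory page size.
mdts=7
page=$((4 * 1024))
max_xfer=$(( (1 << mdts) * page ))
echo "MDTS max transfer: ${max_xfer} bytes ($((max_xfer / 1024)) KiB)"  # 524288 bytes = 512 KiB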
00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:52.862 02:56:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
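The wctemp=343 / cctemp=373 pair recorded just above is in kelvins, as the NVMe spec defines these fields; a quick conversion makes the thresholds recognizable:

# WCTEMP/CCTEMP are reported in kelvins; convert to degrees Celsius.
wctemp=343 cctemp=373
echo "warning composite temp:  $((wctemp - 273)) C"   # 70 C
echo "critical composite temp: $((cctemp - 273)) C"   # 100 C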
00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.862 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:52.863 02:56:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:52.863 
02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.863 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.864 
02:56:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
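The loop entered at nvme/functions.sh@54 above ('for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*') relies on bash extglob so a single pattern matches both the character node (ng2n1) and the block node (nvme2n1) of each namespace under the controller's sysfs directory. A standalone sketch of the same enumeration (paths illustrative; the traced code guards each match with [[ -e ]] rather than using nullglob):

#!/usr/bin/env bash
# Enumerate namespace nodes the way functions.sh@54-58 does:
# @(ng2|nvme2n)* matches ng2n1, ng2n2, ... and nvme2n1, nvme2n2, ...
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}                   # e.g. ng2n1
    echo "namespace node: $ns_dev"     # the trace then runs id-ns on /dev/$ns_dev
done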
00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:08:52.864 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:08:52.865 02:56:46 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:08:52.865 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 
02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0
00:08:52.866 02:56:46 nvme_scc -- nvme/functions.sh@23 -- # [trace condensed] ng2n2 id-ns fields (cont.): nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:08:52.866 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # [trace condensed] ng2n2 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:08:52.867 02:56:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:08:52.867 02:56:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.867 02:56:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:08:52.867 02:56:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:08:52.867 02:56:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:08:52.867 02:56:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:08:52.867 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # [trace condensed] ng2n3 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14
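For readers skimming this xtrace: the machinery being replayed is compact. nvme_get runs nvme-cli's id-ns against a device node, splits each "field : value" output line on ':' (IFS=: with read -r reg val, where val, as the last variable, keeps the rest of the line, embedded colons included, which is how the multi-part lbafN strings such as 'ms:0 lbads:9 rp:0' survive intact), and evals each pair into a global associative array named after the device. A minimal standalone sketch of that pattern, reusing the nvme-cli path shown in the trace; the real functions.sh does additional trimming and error handling that is omitted here:

    #!/usr/bin/env bash
    # Sketch of the nvme_get loop traced at nvme/functions.sh@16-23 (simplified).
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # global assoc array named after the device, e.g. ng2n3
        while IFS=: read -r reg val; do
            reg=${reg%% *}               # field name, minus column padding
            val=${val# }                 # value; embedded colons are preserved by read
            [[ -n $val ]] || continue    # skip banner/empty lines
            # values in this trace are simple tokens; quoting hardening omitted
            eval "${ref}[$reg]=\"$val\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    nvme_get ng2n3 id-ns /dev/ng2n3      # afterwards: ${ng2n3[nsze]} is 0x100000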
00:08:52.867 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # [trace condensed] ng2n3 id-ns fields (cont.): nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:08:52.868 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # [trace condensed] ng2n3 LBA formats identical to ng2n2: lbaf4 'ms:0 lbads:12 rp:0' in use
00:08:52.868 02:56:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:08:52.868 02:56:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.868 02:56:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:08:52.868 02:56:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:08:52.868 02:56:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:08:52.868 02:56:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:08:52.868 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # [trace condensed] nvme2n1 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000
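The for-loop glob at functions.sh@54 is worth a pause: it matches both the character-device nodes (ng2nN) and the block-device nodes (nvme2nN) under the controller's sysfs directory in one pattern, and needs extglob to be legal bash. The index stored into _ctrl_ns is derived with ${ns##*n}, which strips everything through the last 'n', leaving just the namespace number. A small demonstration of those expansions, with the device names copied from the trace:

    #!/usr/bin/env bash
    shopt -s extglob                      # the @(...|...) alternation below needs extglob

    ctrl=/sys/class/nvme/nvme2
    echo "ng${ctrl##*nvme}"               # -> ng2     (char-dev prefix)
    echo "${ctrl##*/}n"                   # -> nvme2n  (block-dev prefix)
    # So the loop glob "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* expands to
    # "$ctrl/"@(ng2|nvme2n)* and picks up ng2n1..ng2n3 and nvme2n1..nvme2n3 alike.

    for ns in ng2n3 nvme2n1 nvme2n2; do
        echo "$ns -> _ctrl_ns index ${ns##*n}"   # ${ns##*n} strips through the last 'n'
    done

Because ng2nN and nvme2nN reduce to the same index, the block-device entries enumerated later in this trace land on, and overwrite, the character-device entries at the same _ctrl_ns slot.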
00:08:52.869 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # [trace condensed] nvme2n1 id-ns fields (cont.): nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:08:52.870 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # [trace condensed] nvme2n1 LBA formats identical to ng2n2: lbaf4 'ms:0 lbads:12 rp:0' in use
00:08:52.870 02:56:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:08:52.870 02:56:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.870 02:56:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:08:52.870 02:56:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:08:52.870 02:56:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:08:52.870 02:56:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:08:52.870 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # [trace condensed] nvme2n2 id-ns fields: nsze=0x100000
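Every namespace in this run reports the same geometry: nsze=ncap=nuse=0x100000 blocks, with flbas=0x4 (the low four bits of flbas select the LBA format index) pointing at lbaf4, 'ms:0 lbads:12 rp:0', i.e. 2^12 = 4096-byte logical blocks with no metadata. That puts each namespace at 0x100000 * 4096 bytes = 4 GiB. A quick arithmetic check in shell, with the values copied from the trace:

    #!/usr/bin/env bash
    nsze=0x100000                    # namespace size in logical blocks (from id-ns)
    lbads=12                         # log2 of the block size, from the in-use lbaf4
    bytes=$(( nsze * (1 << lbads) ))
    echo "$bytes bytes"              # 4294967296
    echo "$(( bytes >> 30 )) GiB"    # 4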
00:08:52.870 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # [trace condensed] nvme2n2 id-ns fields (cont.): ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # [trace condensed] nvme2n2 LBA formats identical to ng2n2: lbaf4 'ms:0 lbads:12 rp:0' in use
00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
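Note that the identify is issued once per /dev/ngXnY character node and once per /dev/nvmeXnY block node, and the trace shows identical data for both: they are two interfaces to the same namespace. To reproduce one of these queries by hand with the same nvme-cli build the trace uses (exact output layout varies by nvme-cli version, so only the command is shown):

    # Reproduce one identify call from the trace; the grep narrows the raw
    # id-ns dump to the fields discussed above (size and LBA-format selection).
    /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 | grep -E 'nsze|flbas|lbaf'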
02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.871 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:52.872 02:56:47 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.872 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.873 02:56:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:52.873 02:56:47 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:52.873 02:56:47 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:52.873 02:56:47 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:52.873 02:56:47 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.873 02:56:47 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:52.873 02:56:47 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.873 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:52.874 02:56:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 
02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:52.874 02:56:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.874 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 
02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:52.875 
02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.875 02:56:47 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"'
00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0
00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:08:52.875 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"'
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"'
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"'
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"'
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-'
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"'
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=-
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
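Every nvme3[...] assignment in the trace above comes from the same small loop in nvme/functions.sh: nvme_get runs nvme-cli's id-ctrl (or id-ns) against the device, splits each "field : value" output line on the colon, and evals the pair into a globally declared associative array. A minimal standalone sketch of that pattern, assuming the usual nvme-cli output format (the ctrl_info array name and the trimming details are illustrative, not SPDK's exact code):

    #!/usr/bin/env bash
    # Collect `nvme id-ctrl` output into a bash associative array, one entry per field.
    declare -A ctrl_info
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # "oncs   " -> "oncs"; "lbaf  0" -> "lbaf0"
        [[ -n $reg ]] || continue    # skip blank lines in the output
        ctrl_info[$reg]=${val# }     # keep the value, minus the space after the colon
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "oncs=${ctrl_info[oncs]}"

The eval seen in the log exists so the array name (nvme3, nvme2n3, and so on) can be chosen at runtime; the later lookups use bash name references (local -n, bash 4.3+), which serve the same purpose in the other direction.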
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:08:52.876 02:56:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:08:52.876 02:56:47 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:08:52.876 02:56:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:08:52.876 02:56:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
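get_ctrls_with_feature has just scanned all four controllers, and each one passed ctrl_has_scc, which tests bit 8 of the ONCS (Optional NVM Command Support) field reported by id-ctrl; that bit advertises the Copy command the SCC test exercises. The gate in isolation looks like the following sketch (the function name is made up for illustration):

    #!/usr/bin/env bash
    # ONCS bit 8 set => the controller implements the NVMe Copy command.
    ctrl_has_copy() {
        local oncs=$1
        (( oncs & 1 << 8 ))    # non-zero arithmetic result => exit status 0
    }
    # 0x15d = 0b1_0101_1101, so bit 8 is set on these QEMU controllers.
    ctrl_has_copy 0x15d && echo "supports Simple Copy"

Since every controller qualifies, get_ctrl_with_feature simply takes the first one returned, nvme1 at PCI address 0000:00:10.0, as the test target.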
00:08:52.876 02:56:47 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:08:53.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:54.045 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:08:54.045 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:08:54.045 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:08:54.045 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:08:54.045 02:56:48 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:08:54.045 02:56:48 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:54.045 02:56:48 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:54.045 02:56:48 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:08:54.045 ************************************
00:08:54.045 START TEST nvme_simple_copy
00:08:54.045 ************************************
00:08:54.045 02:56:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:08:54.303 Initializing NVMe Controllers
00:08:54.303 Attaching to 0000:00:10.0
00:08:54.303 Controller supports SCC. Attached to 0000:00:10.0
00:08:54.303 Namespace ID: 1 size: 6GB
00:08:54.303 Initialization complete.
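The simple_copy app attached above now writes LBAs 0 through 63 with random data, issues a Copy command whose destination starts at LBA 256, reads the destination range back, and counts matching blocks; its output follows. Roughly the same verification could be done from userspace once a namespace is handed back to the kernel nvme driver, sketched here with a hypothetical device path and scratch files:

    #!/usr/bin/env bash
    # Compare 64 source blocks (LBA 0..63) with the copy destination (LBA 256..319)
    # on a namespace formatted with 4096-byte blocks.
    dev=/dev/nvme0n1 bs=4096 count=64
    dd if="$dev" of=/tmp/src.bin bs="$bs" skip=0   count="$count" status=none
    dd if="$dev" of=/tmp/dst.bin bs="$bs" skip=256 count="$count" status=none
    cmp -s /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: $count"

In this run the devices are bound to uio_pci_generic for SPDK's userspace driver, so no kernel block device exists; the test performs the equivalent reads through its own NVMe queues instead.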
00:08:54.303 00:08:54.303 Controller QEMU NVMe Ctrl (12340 ) 00:08:54.303 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:08:54.303 Namespace Block Size:4096 00:08:54.303 Writing LBAs 0 to 63 with Random Data 00:08:54.303 Copied LBAs from 0 - 63 to the Destination LBA 256 00:08:54.303 LBAs matching Written Data: 64 00:08:54.303 00:08:54.303 real 0m0.280s 00:08:54.303 user 0m0.122s 00:08:54.303 sys 0m0.057s 00:08:54.303 02:56:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.303 ************************************ 00:08:54.303 END TEST nvme_simple_copy 00:08:54.303 ************************************ 00:08:54.303 02:56:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:08:54.303 ************************************ 00:08:54.303 END TEST nvme_scc 00:08:54.303 ************************************ 00:08:54.303 00:08:54.303 real 0m7.688s 00:08:54.303 user 0m1.103s 00:08:54.303 sys 0m1.257s 00:08:54.303 02:56:48 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.303 02:56:48 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:54.303 02:56:48 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:08:54.303 02:56:48 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:08:54.303 02:56:48 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:08:54.303 02:56:48 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:08:54.304 02:56:48 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:08:54.304 02:56:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.304 02:56:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.304 02:56:48 -- common/autotest_common.sh@10 -- # set +x 00:08:54.304 ************************************ 00:08:54.304 START TEST nvme_fdp 00:08:54.304 ************************************ 00:08:54.304 02:56:48 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:08:54.304 * Looking for test storage... 00:08:54.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:54.304 02:56:48 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:54.304 02:56:48 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:54.304 02:56:48 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:54.562 02:56:48 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:08:54.562 02:56:48 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.562 02:56:48 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:54.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.562 --rc genhtml_branch_coverage=1 00:08:54.562 --rc genhtml_function_coverage=1 00:08:54.562 --rc genhtml_legend=1 00:08:54.562 --rc geninfo_all_blocks=1 00:08:54.562 --rc geninfo_unexecuted_blocks=1 00:08:54.562 00:08:54.562 ' 00:08:54.562 02:56:48 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:54.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.562 --rc genhtml_branch_coverage=1 00:08:54.562 --rc genhtml_function_coverage=1 00:08:54.562 --rc genhtml_legend=1 00:08:54.562 --rc geninfo_all_blocks=1 00:08:54.562 --rc geninfo_unexecuted_blocks=1 00:08:54.562 00:08:54.562 ' 00:08:54.562 02:56:48 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:54.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.562 --rc genhtml_branch_coverage=1 00:08:54.562 --rc genhtml_function_coverage=1 00:08:54.562 --rc genhtml_legend=1 00:08:54.562 --rc geninfo_all_blocks=1 00:08:54.562 --rc geninfo_unexecuted_blocks=1 00:08:54.562 00:08:54.562 ' 00:08:54.562 02:56:48 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:54.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.562 --rc genhtml_branch_coverage=1 00:08:54.562 --rc genhtml_function_coverage=1 00:08:54.562 --rc genhtml_legend=1 00:08:54.562 --rc geninfo_all_blocks=1 00:08:54.562 --rc geninfo_unexecuted_blocks=1 00:08:54.562 00:08:54.562 ' 00:08:54.562 02:56:48 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:54.562 02:56:48 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:54.562 02:56:48 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:54.562 02:56:48 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:54.562 02:56:48 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.562 02:56:48 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.563 02:56:48 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.563 02:56:48 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.563 02:56:48 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.563 02:56:48 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:08:54.563 02:56:48 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.563 02:56:48 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:08:54.563 02:56:48 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:54.563 02:56:48 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:08:54.563 02:56:48 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:54.563 02:56:48 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:08:54.563 02:56:48 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:54.563 02:56:48 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:54.563 02:56:48 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:54.563 02:56:48 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:08:54.563 02:56:48 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:54.563 02:56:48 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:54.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.821 Waiting for block devices as requested 00:08:55.079 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:55.079 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:55.079 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:55.079 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.389 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:00.389 02:56:54 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:00.389 02:56:54 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:00.389 02:56:54 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:00.389 02:56:54 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:00.389 02:56:54 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:00.389 02:56:54 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:00.389 02:56:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:00.389 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:00.390 02:56:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:00.390 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:00.391 02:56:54 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 
02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:00.391 02:56:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:00.391 02:56:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:00.392 02:56:54 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:00.392 02:56:54 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
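[Editor's note] The long register dump running through here (and resuming below) is functions.sh's nvme_get loop (@16-23): it pipes nvme-cli's "reg : val" listing through an IFS=: read and stores each pair into the namespace's associative array. Stripped of the xtrace noise, the loop amounts to this sketch; the binary path and device node are taken from the trace, and the whitespace trimming is simplified relative to the real helper:

    declare -A ng0n1
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # "nsze   " -> "nsze"; "lbaf  0 " -> "lbaf0"
        val=${val# }               # nvme-cli prints one space after the colon
        [[ -n $reg && -n $val ]] && ng0n1[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1)

    echo "nsze=${ng0n1[nsze]} flbas=${ng0n1[flbas]}"   # -> nsze=0x140000 flbas=0x4

Because read joins any remaining colon-separated fields into val, the multi-colon "lbaf N : ms:... lbads:... rp:..." lines land intact under keys lbaf0..lbaf7, which is how the entries below are produced.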
00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:00.392 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:00.393 02:56:54 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
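The trace above is functions.sh's nvme_get filling the ng0n1 associative array: each "reg : val" line printed by nvme-cli is split on the colon by read, empty values are skipped, and the pair is stored through eval. A minimal self-contained bash sketch of that idiom, assuming nvme-cli is on PATH; this is a simplified reconstruction, not the verbatim SPDK helper:

    nvme_get() {
        local ref=$1 source=$2 dev=$3 reg val
        local -gA "$ref=()"               # e.g. declare -gA ng0n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}      # "mssrl   " -> "mssrl"
            [[ -n $val ]] || continue     # skip the banner and blank lines
            val=${val# }                  # drop the space after the colon
            eval "${ref}[\$reg]=\$val"    # ng0n1[mssrl]=128, ...
        done < <(nvme "$source" "$dev")
    }

    # Usage (needs a real device node):
    #   nvme_get ng0n1 id-ns /dev/ng0n1
    #   echo "${ng0n1[mssrl]}"            # prints 128 on this rig per the log

Storing the identify pages this way lets the rest of the test script treat every controller and namespace field as an ordinary array lookup instead of re-invoking nvme-cli.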
00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:00.393 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:00.394 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:00.394 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:00.395 02:56:54 nvme_fdp -- scripts/common.sh@18-27 -- # pci filter checks pass, return 0
00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
'nvme1[sn]="12340 "' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:00.395 02:56:54 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.395 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.396 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.397 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1: active_power_workload='-'
00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:09:00.398 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0
00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1: nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0
nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:00.399 02:56:54 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.399 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:00.400 02:56:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:00.400 02:56:54 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.400 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:00.401 02:56:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:00.401 02:56:54 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:00.401 02:56:54 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:00.401 02:56:54 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:00.401 02:56:54 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.401 02:56:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:00.401 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:00.402 02:56:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.402 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:00.403 02:56:54 nvme_fdp -- 
00:09:00.403 02:56:54 nvme_fdp -- nvme/functions.sh: nvme2 id-ctrl fields (continued): mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:00.404 02:56:54 nvme_fdp -- nvme/functions.sh: nvme2 power states: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:00.404 02:56:54 nvme_fdp -- nvme/functions.sh@53-57: local -n _ctrl_ns=nvme2_ns; namespace scan of /sys/class/nvme/nvme2 finds ng2n1; nvme_get ng2n1 via /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
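Every record above follows one pattern: nvme_get() pipes nvme-cli identify output through a colon-split read loop and evals each field into a global associative array named by the caller (the trace shows the shift, the local -gA declaration, the [[ -n $val ]] guard, and the eval). A minimal sketch of that loop, assuming only bash 4.3+ and nvme-cli; nvme_get_sketch and its whitespace trimming are illustrative stand-ins, not SPDK's exact implementation:

shopt -s extglob

nvme_get_sketch() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                    # e.g. declare -gA ng2n1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip banner and blank lines
        reg=${reg%%+([[:space:]])}         # 'nsze    ' -> 'nsze'
        val=${val##+([[:space:]])}         # ' 0x100000' -> '0x100000'
        eval "${ref}[\$reg]=\"\$val\""
    done < <("$@")
}

# usage, assuming /dev/ng2n1 exists as on this test VM:
#   nvme_get_sketch ng2n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
#   echo "${ng2n1[nsze]}"    # -> 0x100000

Splitting on IFS=: only consumes the first colon per record, so multi-colon values like the ps0 line above survive intact in val.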
00:09:00.404 02:56:54 nvme_fdp -- nvme/functions.sh: ng2n1 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:00.405 02:56:54 nvme_fdp -- nvme/functions.sh: ng2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:00.406 02:56:54 nvme_fdp -- nvme/functions.sh@58: _ctrl_ns[1]=ng2n1; namespace scan finds ng2n2; nvme_get ng2n2 via /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
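The scan loop traced at functions.sh@54, for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, relies on extglob to match both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...) of one controller with a single pattern. A sketch of that enumeration under the same assumptions, with the echo standing in for the real nvme_get call:

shopt -s extglob nullglob

for ctrl in /sys/class/nvme/nvme+([0-9]); do
    inst=${ctrl##*nvme}    # '2' for /sys/class/nvme/nvme2
    name=${ctrl##*/}       # 'nvme2'
    # one extglob matches ng2n1..ng2n3 and nvme2n1 alike
    for ns in "$ctrl/"@("ng${inst}"|"${name}n")*; do
        ns_dev=${ns##*/}   # e.g. ng2n1 or nvme2n1
        nsid=${ns_dev##*n} # trailing digits are the namespace id
        echo "ctrl=$name nsid=$nsid dev=/dev/$ns_dev"
    done
done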
00:09:00.406 02:56:54 nvme_fdp -- nvme/functions.sh: ng2n2 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000; LBA formats lbaf0-lbaf7 identical to ng2n1 (lbaf4 in use)
00:09:00.407 02:56:54 nvme_fdp -- nvme/functions.sh@58: _ctrl_ns[2]=ng2n2; namespace scan finds ng2n3; nvme_get ng2n3 via /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
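The local -n _ctrl_ns=nvme2_ns nameref plus the _ctrl_ns[${ns##*n}]=... assignments traced above build a per-controller index from namespace id to the name of the array holding that namespace's fields. A sketch of reading a field back through that double indirection, with the array contents taken from this log and lookup_field as a hypothetical helper, not part of functions.sh:

declare -A nvme2_ns=([1]=ng2n1 [2]=ng2n2 [3]=ng2n3)
declare -A ng2n1=([nsze]=0x100000 [flbas]=0x4)    # subset of the fields above

lookup_field() {
    local ctrl=$1 nsid=$2 field=$3
    local -n _ns_index="${ctrl}_ns"          # -> nvme2_ns
    local -n _ns_data="${_ns_index[$nsid]}"  # -> ng2n1
    echo "${_ns_data[$field]}"
}

lookup_field nvme2 1 nsze    # prints 0x100000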
00:09:00.670 02:56:54 nvme_fdp -- nvme/functions.sh: ng2n3 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000; LBA formats lbaf0-lbaf7 identical to ng2n1 (lbaf4 in use)
00:09:00.671 02:56:54 nvme_fdp -- nvme/functions.sh@58: _ctrl_ns[3]=ng2n3; namespace scan finds nvme2n1 (block node); nvme_get nvme2n1 via /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:00.672 
02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:00.672 02:56:54 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:00.672 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:00.673 
02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
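The trace above is nvme_get (the functions.sh@16-23 markers) populating one bash associative array per namespace: it runs /usr/local/src/nvme-cli/nvme id-ns on the device, splits every output line on ':' with read -r reg val, skips empty values, and evals each pair into the array, which is how entries such as nvme2n1[flbas]=0x4 and nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' end up recorded. A minimal standalone sketch of the same parsing pattern follows; it assumes nvme-cli is installed, and the helper name nvme_get_sketch plus the exact whitespace trimming are illustrative guesses, not the literal functions.sh source.

#!/usr/bin/env bash
# Sketch of the id-ns parsing pattern visible in the trace (assumption:
# this mirrors, but is not, the real nvme/functions.sh nvme_get).
nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    declare -gA "$ref"                # global associative array, e.g. nvme2n1
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # "lbaf  4 " -> "lbaf4", matching the trace keys
        val=${val# }                  # trim the space after the first colon
        [[ -n $reg && -n $val ]] || continue   # the [[ -n ... ]] step at @22
        eval "${ref}[$reg]=\"\$val\""          # the eval-into-array step at @23
    done < <(nvme id-ns "$dev")
}

nvme_get_sketch nvme2n1 /dev/nvme2n1
echo "flbas=${nvme2n1[flbas]} lbaf4=${nvme2n1[lbaf4]}"

Decoding the values captured above: flbas=0x4 selects LBA format 4, and in 'ms:0 lbads:12 rp:0 (in use)' lbads is log2 of the LBA data size, so these QEMU namespaces are formatted with 2^12 = 4096-byte blocks and no metadata; the lbaf0-lbaf3 entries with lbads:9 are the 512-byte variants.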
00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:00.673 02:56:54 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.673 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:00.674 02:56:54 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.674 02:56:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:00.675 02:56:54 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:00.675 02:56:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:00.675 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:00.676 02:56:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:00.676 02:56:54 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:00.676 02:56:54 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:00.676 02:56:54 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:00.676 02:56:54 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.676 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
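(Editorial note: the repetitive trace above is nvme/functions.sh's nvme_get walking `nvme id-ctrl` output line by line: IFS=: splits each line into a register name and a value, and an eval stores the pair in a per-controller associative array, nvme3 here. A minimal standalone sketch of that parsing pattern, where "ctrl_regs" is a hypothetical array name standing in for the real per-controller array:)

#!/usr/bin/env bash
# Sketch of the id-ctrl parsing loop traced above; "ctrl_regs" is a
# hypothetical name for illustration only.
declare -A ctrl_regs
while IFS=: read -r reg val; do
  [[ -n $val ]] || continue             # keep only "name : value" lines
  reg=${reg//[[:space:]]/}              # strip padding around the register name
  val="${val#"${val%%[![:space:]]*}"}"  # trim leading whitespace from the value
  ctrl_regs[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
echo "vid=${ctrl_regs[vid]} ctratt=${ctrl_regs[ctratt]}"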
00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 
02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:00.677 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
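(Editorial note: among the registers just captured, oncs=0x15d is a bitmask of optional NVMe commands. A short sketch decoding it, with bit positions recalled from the NVMe base specification's Identify Controller ONCS field, so treat the names as a reading aid rather than authoritative:)

# Decode the ONCS value recorded above for nvme3 (0x15d); bash
# arithmetic accepts the hex literal directly.
oncs=0x15d
names=("Compare" "Write Uncorrectable" "Dataset Management" "Write Zeroes"
       "Save/Select in Features" "Reservations" "Timestamp" "Verify" "Copy")
for bit in "${!names[@]}"; do
  (( oncs & 1 << bit )) && echo "bit $bit set: ${names[bit]}"
done
# Prints: Compare, Dataset Management, Write Zeroes,
# Save/Select in Features, Timestamp, Copy.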
00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.678 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
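(Editorial note: similarly, the sqes=0x66 and cqes=0x44 entries a few registers back pack queue entry sizes as two powers of two, low nibble the required size and high nibble the maximum, per my reading of the spec. A quick sketch:)

# Unpack SQES/CQES from the id-ctrl dump above: each nibble is log2 of
# a queue entry size in bytes.
decode_qes() {
  local qes=$1
  printf 'min=%d bytes max=%d bytes\n' $((1 << (qes & 0xf))) $((1 << (qes >> 4 & 0xf)))
}
decode_qes 0x66   # submission queue entries: min=64 max=64
decode_qes 0x44   # completion queue entries: min=16 max=16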
00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:00.679 02:56:54 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:00.679 02:56:54 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:00.680 02:56:54 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:00.680 02:56:54 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:00.680 02:56:54 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:00.680 02:56:54 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:01.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:01.501 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:01.501 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:01.501 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:01.501 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:01.759 02:56:55 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:01.759 02:56:55 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:01.759 02:56:55 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.759 02:56:55 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:01.759 ************************************ 00:09:01.759 START TEST nvme_flexible_data_placement 00:09:01.759 ************************************ 00:09:01.759 02:56:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:01.759 Initializing NVMe Controllers 00:09:01.759 Attaching to 0000:00:13.0 00:09:01.759 Controller supports FDP Attached to 0000:00:13.0 00:09:01.759 Namespace ID: 1 Endurance Group ID: 1 00:09:01.759 Initialization complete. 
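(Editorial note: the controller walk just above is how nvme_fdp.sh picks its device: get_ctrls_with_feature reads each controller's cached CTRATT and, as the trace shows, tests bit 19, which only nvme3 with ctratt=0x88010 has set. A sketch of that check in isolation, with the values taken from this run:)

# Sketch of the CTRATT test traced above: bit 19 advertises FDP support.
ctrl_has_fdp() {
  local ctratt=$1
  (( ctratt & 1 << 19 ))
}
ctrl_has_fdp 0x8000  && echo "has FDP"   # nvme0/nvme1/nvme2: bit clear, no output
ctrl_has_fdp 0x88010 && echo "has FDP"   # nvme3: prints "has FDP"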
00:09:01.759
00:09:01.759 ==================================
00:09:01.759 == FDP tests for Namespace: #01 ==
00:09:01.759 ==================================
00:09:01.759
00:09:01.759 Get Feature: FDP:
00:09:01.759 =================
00:09:01.759 Enabled: Yes
00:09:01.759 FDP configuration Index: 0
00:09:01.759
00:09:01.759 FDP configurations log page
00:09:01.759 ===========================
00:09:01.759 Number of FDP configurations: 1
00:09:01.759 Version: 0
00:09:01.759 Size: 112
00:09:01.759 FDP Configuration Descriptor: 0
00:09:01.759 Descriptor Size: 96
00:09:01.759 Reclaim Group Identifier format: 2
00:09:01.759 FDP Volatile Write Cache: Not Present
00:09:01.759 FDP Configuration: Valid
00:09:01.759 Vendor Specific Size: 0
00:09:01.759 Number of Reclaim Groups: 2
00:09:01.759 Number of Reclaim Unit Handles: 8
00:09:01.759 Max Placement Identifiers: 128
00:09:01.759 Number of Namespaces Supported: 256
00:09:01.759 Reclaim Unit Nominal Size: 6000000 bytes
00:09:01.759 Estimated Reclaim Unit Time Limit: Not Reported
00:09:01.759 RUH Desc #000: RUH Type: Initially Isolated
00:09:01.759 RUH Desc #001: RUH Type: Initially Isolated
00:09:01.759 RUH Desc #002: RUH Type: Initially Isolated
00:09:01.759 RUH Desc #003: RUH Type: Initially Isolated
00:09:01.759 RUH Desc #004: RUH Type: Initially Isolated
00:09:01.759 RUH Desc #005: RUH Type: Initially Isolated
00:09:01.759 RUH Desc #006: RUH Type: Initially Isolated
00:09:01.759 RUH Desc #007: RUH Type: Initially Isolated
00:09:01.759
00:09:01.759 FDP reclaim unit handle usage log page
00:09:01.759 ======================================
00:09:01.759 Number of Reclaim Unit Handles: 8
00:09:01.759 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:09:01.759 RUH Usage Desc #001: RUH Attributes: Unused
00:09:01.759 RUH Usage Desc #002: RUH Attributes: Unused
00:09:01.759 RUH Usage Desc #003: RUH Attributes: Unused
00:09:01.759 RUH Usage Desc #004: RUH Attributes: Unused
00:09:01.759 RUH Usage Desc #005: RUH Attributes: Unused
00:09:01.759 RUH Usage Desc #006: RUH Attributes: Unused
00:09:01.759 RUH Usage Desc #007: RUH Attributes: Unused
00:09:01.759
00:09:01.759 FDP statistics log page
00:09:01.759 =======================
00:09:01.759 Host bytes with metadata written: 1040371712
00:09:01.759 Media bytes with metadata written: 1040482304
00:09:01.759 Media bytes erased: 0
00:09:01.759
00:09:01.759 FDP Reclaim unit handle status
00:09:01.759 ==============================
00:09:01.759 Number of RUHS descriptors: 2
00:09:01.759 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003fd3
00:09:01.759 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:09:01.759
00:09:01.759 FDP write on placement id: 0 success
00:09:01.759
00:09:01.759 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:09:01.759
00:09:01.759 IO mgmt send: RUH update for Placement ID: #0 Success
00:09:01.759
00:09:01.759 Get Feature: FDP Events for Placement handle: #0
00:09:01.759 ========================
00:09:01.759 Number of FDP Events: 6
00:09:01.759 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:09:01.759 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:09:01.759 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:09:01.759 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:09:01.759 FDP Event: #4 Type: Media Reallocated Enabled: No
00:09:01.759 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:09:01.759
00:09:01.759 FDP events log page
00:09:01.759 ===================
00:09:01.759 Number of FDP events: 1
00:09:01.759 FDP Event #0:
00:09:01.759 Event Type: RU Not Written to Capacity
00:09:01.759 Placement Identifier: Valid
00:09:01.759 NSID: Valid
00:09:01.759 Location: Valid
00:09:01.759 Placement Identifier: 0
00:09:01.759 Event Timestamp: 5
00:09:01.759 Namespace Identifier: 1
00:09:01.759 Reclaim Group Identifier: 0
00:09:01.759 Reclaim Unit Handle Identifier: 0
00:09:01.759
00:09:01.759 FDP test passed
00:09:02.018
00:09:02.018 real 0m0.233s
00:09:02.018 user 0m0.075s
00:09:02.018 sys 0m0.057s
00:09:02.018 02:56:56 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:02.018 02:56:56 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:09:02.018 ************************************
00:09:02.018 END TEST nvme_flexible_data_placement
00:09:02.018 ************************************
00:09:02.018
00:09:02.018 real 0m7.589s
00:09:02.018 user 0m1.103s
00:09:02.018 sys 0m1.369s
00:09:02.018 02:56:56 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:02.018 02:56:56 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:02.018 ************************************
00:09:02.018 END TEST nvme_fdp
00:09:02.018 ************************************
00:09:02.018 02:56:56 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:09:02.018 02:56:56 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:02.018 02:56:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:02.018 02:56:56 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:02.018 02:56:56 -- common/autotest_common.sh@10 -- # set +x
00:09:02.018 ************************************
00:09:02.018 START TEST nvme_rpc
00:09:02.018 ************************************
00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:02.018 * Looking for test storage...
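(Editorial note, before the trace moves on to nvme_rpc: the FDP report above was assembled from the log pages that TP4146 added to NVMe. A hypothetical way to read the same pages raw with stock nvme-cli follows; the log IDs are recalled from the NVMe 2.0 spec, the --log-len values are illustrative, and since these pages are endurance-group scoped your nvme-cli may also need a log-specific-identifier option:)

# Hypothetical raw reads of the FDP log pages summarized above.
nvme get-log /dev/nvme3 --log-id=0x20 --log-len=112   # FDP configurations
nvme get-log /dev/nvme3 --log-id=0x21 --log-len=80    # reclaim unit handle usage
nvme get-log /dev/nvme3 --log-id=0x22 --log-len=64    # FDP statistics
nvme get-log /dev/nvme3 --log-id=0x23 --log-len=64    # FDP events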
00:09:02.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.018 02:56:56 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:02.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.018 --rc genhtml_branch_coverage=1 00:09:02.018 --rc genhtml_function_coverage=1 00:09:02.018 --rc genhtml_legend=1 00:09:02.018 --rc geninfo_all_blocks=1 00:09:02.018 --rc geninfo_unexecuted_blocks=1 00:09:02.018 00:09:02.018 ' 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:02.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.018 --rc genhtml_branch_coverage=1 00:09:02.018 --rc genhtml_function_coverage=1 00:09:02.018 --rc genhtml_legend=1 00:09:02.018 --rc geninfo_all_blocks=1 00:09:02.018 --rc geninfo_unexecuted_blocks=1 00:09:02.018 00:09:02.018 ' 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:02.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.018 --rc genhtml_branch_coverage=1 00:09:02.018 --rc genhtml_function_coverage=1 00:09:02.018 --rc genhtml_legend=1 00:09:02.018 --rc geninfo_all_blocks=1 00:09:02.018 --rc geninfo_unexecuted_blocks=1 00:09:02.018 00:09:02.018 ' 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:02.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.018 --rc genhtml_branch_coverage=1 00:09:02.018 --rc genhtml_function_coverage=1 00:09:02.018 --rc genhtml_legend=1 00:09:02.018 --rc geninfo_all_blocks=1 00:09:02.018 --rc geninfo_unexecuted_blocks=1 00:09:02.018 00:09:02.018 ' 00:09:02.018 02:56:56 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:02.018 02:56:56 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:02.018 02:56:56 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:02.276 02:56:56 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:02.276 02:56:56 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:02.276 02:56:56 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:02.276 02:56:56 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:02.276 02:56:56 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65732 00:09:02.276 02:56:56 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:02.276 02:56:56 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:02.276 02:56:56 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65732 00:09:02.276 02:56:56 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65732 ']' 00:09:02.276 02:56:56 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.276 02:56:56 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.276 02:56:56 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.276 02:56:56 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.276 02:56:56 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.276 [2024-12-10 02:56:56.495842] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
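(Editorial note: the nvme_rpc preamble above resolves its target the way every per-device test here does: gen_nvme.sh emits an SPDK bdev config as JSON, jq pulls each traddr, and the first PCI address wins. Condensed from the trace, with the guard mirroring the (( 4 == 0 )) check recorded above:)

# First-NVMe-bdf selection as traced above (paths per this CI workspace).
bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 through 0000:00:13.0 in this run
echo "using ${bdfs[0]}"      # nvme_rpc.sh binds Nvme0 to 0000:00:10.0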
00:09:02.276 [2024-12-10 02:56:56.495962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65732 ] 00:09:02.276 [2024-12-10 02:56:56.654007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:02.535 [2024-12-10 02:56:56.750234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.535 [2024-12-10 02:56:56.750310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.100 02:56:57 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.100 02:56:57 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:03.100 02:56:57 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:03.357 Nvme0n1 00:09:03.357 02:56:57 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:03.357 02:56:57 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:03.615 request: 00:09:03.615 { 00:09:03.615 "bdev_name": "Nvme0n1", 00:09:03.615 "filename": "non_existing_file", 00:09:03.615 "method": "bdev_nvme_apply_firmware", 00:09:03.615 "req_id": 1 00:09:03.615 } 00:09:03.615 Got JSON-RPC error response 00:09:03.615 response: 00:09:03.615 { 00:09:03.615 "code": -32603, 00:09:03.615 "message": "open file failed." 00:09:03.615 } 00:09:03.615 02:56:57 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:03.615 02:56:57 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:03.615 02:56:57 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:03.615 02:56:57 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:03.873 02:56:57 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65732 00:09:03.873 02:56:57 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65732 ']' 00:09:03.873 02:56:57 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65732 00:09:03.873 02:56:57 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:03.873 02:56:58 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.873 02:56:58 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65732 00:09:03.873 02:56:58 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.873 02:56:58 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.873 killing process with pid 65732 00:09:03.873 02:56:58 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65732' 00:09:03.873 02:56:58 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65732 00:09:03.873 02:56:58 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65732 00:09:05.247 00:09:05.247 real 0m3.200s 00:09:05.247 user 0m6.118s 00:09:05.247 sys 0m0.472s 00:09:05.247 02:56:59 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.247 02:56:59 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.247 ************************************ 00:09:05.247 END TEST nvme_rpc 00:09:05.247 ************************************ 00:09:05.247 02:56:59 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:05.247 02:56:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:05.247 02:56:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.247 02:56:59 -- common/autotest_common.sh@10 -- # set +x 00:09:05.247 ************************************ 00:09:05.247 START TEST nvme_rpc_timeouts 00:09:05.247 ************************************ 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:05.247 * Looking for test storage... 00:09:05.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.247 02:56:59 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:05.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.247 --rc genhtml_branch_coverage=1 00:09:05.247 --rc genhtml_function_coverage=1 00:09:05.247 --rc genhtml_legend=1 00:09:05.247 --rc geninfo_all_blocks=1 00:09:05.247 --rc geninfo_unexecuted_blocks=1 00:09:05.247 00:09:05.247 ' 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:05.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.247 --rc genhtml_branch_coverage=1 00:09:05.247 --rc genhtml_function_coverage=1 00:09:05.247 --rc genhtml_legend=1 00:09:05.247 --rc geninfo_all_blocks=1 00:09:05.247 --rc geninfo_unexecuted_blocks=1 00:09:05.247 00:09:05.247 ' 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:05.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.247 --rc genhtml_branch_coverage=1 00:09:05.247 --rc genhtml_function_coverage=1 00:09:05.247 --rc genhtml_legend=1 00:09:05.247 --rc geninfo_all_blocks=1 00:09:05.247 --rc geninfo_unexecuted_blocks=1 00:09:05.247 00:09:05.247 ' 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:05.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.247 --rc genhtml_branch_coverage=1 00:09:05.247 --rc genhtml_function_coverage=1 00:09:05.247 --rc genhtml_legend=1 00:09:05.247 --rc geninfo_all_blocks=1 00:09:05.247 --rc geninfo_unexecuted_blocks=1 00:09:05.247 00:09:05.247 ' 00:09:05.247 02:56:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.247 02:56:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65797 00:09:05.247 02:56:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65797 00:09:05.247 02:56:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65829 00:09:05.247 02:56:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
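For reference, the nvme_rpc sequence that passed just above can be replayed by hand against a running spdk_tgt. This is an illustrative sketch only, using the rpc.py calls and the PCIe address (0000:00:10.0) exactly as they appear in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach a PCIe controller; on success the new bdev name (Nvme0n1) is printed.
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0

    # A missing firmware image is the negative case exercised above: the
    # target answers with JSON-RPC error -32603 ("open file failed.").
    if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "apply_firmware failed as expected"
    fi

    # Clean up the controller again.
    $rpc bdev_nvme_detach_controller Nvme0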
00:09:05.247 02:56:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65829 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65829 ']' 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.247 02:56:59 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:05.247 02:56:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:05.505 [2024-12-10 02:56:59.692318] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:09:05.505 [2024-12-10 02:56:59.692441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65829 ] 00:09:05.505 [2024-12-10 02:56:59.853653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:05.764 [2024-12-10 02:56:59.951562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.764 [2024-12-10 02:56:59.951714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.332 02:57:00 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.332 Checking default timeout settings: 00:09:06.332 02:57:00 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:06.332 02:57:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:06.332 02:57:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:06.590 Making settings changes with rpc: 00:09:06.590 02:57:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:06.590 02:57:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:06.848 Check default vs. modified settings: 00:09:06.848 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:06.848 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65797 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65797 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:07.106 Setting action_on_timeout is changed as expected. 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65797 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65797 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:07.106 Setting timeout_us is changed as expected. 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65797 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65797 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:07.106 Setting timeout_admin_us is changed as expected. 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65797 /tmp/settings_modified_65797 00:09:07.106 02:57:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65829 00:09:07.107 02:57:01 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65829 ']' 00:09:07.107 02:57:01 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65829 00:09:07.107 02:57:01 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:07.107 02:57:01 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.107 02:57:01 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65829 00:09:07.107 02:57:01 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.107 02:57:01 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.107 02:57:01 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65829' 00:09:07.107 killing process with pid 65829 00:09:07.107 02:57:01 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65829 00:09:07.107 02:57:01 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65829 00:09:09.004 RPC TIMEOUT SETTING TEST PASSED. 00:09:09.004 02:57:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
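Condensed, the nvme_rpc_timeouts test that reports PASSED below does the following: snapshot the default bdev_nvme options, flip the three timeout knobs, snapshot again, and require that each knob changed. A minimal sketch of that flow (temp-file names are illustrative; the rpc.py arguments are the ones from the run above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc save_config > /tmp/settings_default
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified

    # In the run above this observes none->abort, 0->12000000 and 0->24000000.
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
    done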
00:09:09.004 00:09:09.004 real 0m3.408s 00:09:09.004 user 0m6.639s 00:09:09.004 sys 0m0.479s 00:09:09.004 02:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.004 02:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:09.004 ************************************ 00:09:09.004 END TEST nvme_rpc_timeouts 00:09:09.004 ************************************ 00:09:09.004 02:57:02 -- spdk/autotest.sh@239 -- # uname -s 00:09:09.004 02:57:02 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:09.004 02:57:02 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:09.004 02:57:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.004 02:57:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.004 02:57:02 -- common/autotest_common.sh@10 -- # set +x 00:09:09.004 ************************************ 00:09:09.004 START TEST sw_hotplug 00:09:09.004 ************************************ 00:09:09.004 02:57:02 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:09.004 * Looking for test storage... 00:09:09.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:09.004 02:57:02 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.004 02:57:02 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:09.004 02:57:02 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.004 02:57:03 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.004 02:57:03 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:09.004 02:57:03 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.004 02:57:03 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:09.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.004 --rc genhtml_branch_coverage=1 00:09:09.004 --rc genhtml_function_coverage=1 00:09:09.004 --rc genhtml_legend=1 00:09:09.004 --rc geninfo_all_blocks=1 00:09:09.004 --rc geninfo_unexecuted_blocks=1 00:09:09.004 00:09:09.004 ' 00:09:09.004 02:57:03 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:09.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.004 --rc genhtml_branch_coverage=1 00:09:09.004 --rc genhtml_function_coverage=1 00:09:09.004 --rc genhtml_legend=1 00:09:09.004 --rc geninfo_all_blocks=1 00:09:09.004 --rc geninfo_unexecuted_blocks=1 00:09:09.004 00:09:09.004 ' 00:09:09.004 02:57:03 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:09.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.004 --rc genhtml_branch_coverage=1 00:09:09.004 --rc genhtml_function_coverage=1 00:09:09.004 --rc genhtml_legend=1 00:09:09.004 --rc geninfo_all_blocks=1 00:09:09.004 --rc geninfo_unexecuted_blocks=1 00:09:09.004 00:09:09.004 ' 00:09:09.004 02:57:03 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:09.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.004 --rc genhtml_branch_coverage=1 00:09:09.004 --rc genhtml_function_coverage=1 00:09:09.004 --rc genhtml_legend=1 00:09:09.004 --rc geninfo_all_blocks=1 00:09:09.004 --rc geninfo_unexecuted_blocks=1 00:09:09.004 00:09:09.004 ' 00:09:09.004 02:57:03 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:09.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:09.280 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:09.280 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:09.280 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:09.280 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:09.280 02:57:03 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:09.280 02:57:03 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:09.280 02:57:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
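The block above is SPDK's scripts/common.sh probing whether the installed lcov predates version 2 (lt 1.15 2), so it can choose the matching --rc option names: both version strings are split on [.-:] and compared component by component. A standalone rendition of that comparison (the function name version_lt is mine; plain decimal components assumed):

    # Return 0 (true) when version $1 is strictly older than $2.
    version_lt() {
        local IFS=.-: v1 v2 i
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        # Walk the longer of the two component lists, padding with zeros.
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            ((a < b)) && return 0
            ((a > b)) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "1.15 < 2"   # same verdict as the lt 1.15 2 probe above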
00:09:09.280 02:57:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:09.280 02:57:03 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:09.280 02:57:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:09.281 02:57:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:09.281 02:57:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:09.281 02:57:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:09.281 02:57:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:09.281 02:57:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:09.281 02:57:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:09.281 02:57:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:09.281 02:57:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:09.281 02:57:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:09.281 02:57:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:09.281 02:57:03 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:09.281 02:57:03 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:09.281 02:57:03 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:09.281 02:57:03 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:09.281 02:57:03 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:09.538 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:09.796 Waiting for block devices as requested 00:09:09.796 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.796 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.796 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.054 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.317 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:15.317 02:57:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:15.317 02:57:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:15.317 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:15.317 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:15.317 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:15.575 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:15.832 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.832 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.832 02:57:10 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:15.832 02:57:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:16.091 02:57:10 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:16.091 02:57:10 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:16.091 02:57:10 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66687 00:09:16.091 02:57:10 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:16.091 02:57:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:16.091 02:57:10 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:16.091 02:57:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:16.091 02:57:10 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:16.091 02:57:10 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:16.091 02:57:10 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:16.091 02:57:10 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:16.091 02:57:10 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:16.091 02:57:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:16.091 02:57:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:16.091 02:57:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:16.091 02:57:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:16.091 02:57:10 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:16.091 Initializing NVMe Controllers 00:09:16.091 Attaching to 0000:00:10.0 00:09:16.091 Attaching to 0000:00:11.0 00:09:16.091 Attached to 0000:00:10.0 00:09:16.091 Attached to 0000:00:11.0 00:09:16.091 Initialization complete. Starting I/O... 
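The nvme_in_userspace step earlier above turns lspci output into the list of NVMe controller addresses (0000:00:10.0 through 0000:00:13.0) by matching PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). The pipeline, lifted from the trace and runnable on its own:

    # Print the domain:bus:device.function of every NVMe controller.
    # -mm -n -D: machine-readable output, numeric IDs, full PCI domain;
    # the class field then appears as "0108" and prog-if 02 as a -p02 suffix.
    lspci -mm -n -D \
        | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'

Each surviving BDF is then filtered through pci_can_use (the PCI_ALLOWED/PCI_BLOCKED lists) and, on Linux, kept only if /sys/bus/pci/drivers/nvme knows the device, which is how the run above arrives at four candidate controllers before nvme_count=2 trims the array to the first two.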
00:09:16.091 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:16.091 QEMU NVMe Ctrl (12341 ): 4 I/Os completed (+4) 00:09:16.091 00:09:17.463 QEMU NVMe Ctrl (12340 ): 2467 I/Os completed (+2467) 00:09:17.463 QEMU NVMe Ctrl (12341 ): 2451 I/Os completed (+2447) 00:09:17.463 00:09:18.397 QEMU NVMe Ctrl (12340 ): 5962 I/Os completed (+3495) 00:09:18.397 QEMU NVMe Ctrl (12341 ): 5938 I/Os completed (+3487) 00:09:18.397 00:09:19.354 QEMU NVMe Ctrl (12340 ): 9591 I/Os completed (+3629) 00:09:19.354 QEMU NVMe Ctrl (12341 ): 9607 I/Os completed (+3669) 00:09:19.354 00:09:20.304 QEMU NVMe Ctrl (12340 ): 13210 I/Os completed (+3619) 00:09:20.304 QEMU NVMe Ctrl (12341 ): 13267 I/Os completed (+3660) 00:09:20.304 00:09:21.247 QEMU NVMe Ctrl (12340 ): 16866 I/Os completed (+3656) 00:09:21.247 QEMU NVMe Ctrl (12341 ): 16932 I/Os completed (+3665) 00:09:21.247 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:22.182 [2024-12-10 02:57:16.225572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:22.182 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:22.182 [2024-12-10 02:57:16.226746] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 [2024-12-10 02:57:16.226792] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 [2024-12-10 02:57:16.226810] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 [2024-12-10 02:57:16.226829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:22.182 [2024-12-10 02:57:16.228696] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 [2024-12-10 02:57:16.228735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 [2024-12-10 02:57:16.228749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 [2024-12-10 02:57:16.228766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:22.182 [2024-12-10 02:57:16.248414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:22.182 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:22.182 [2024-12-10 02:57:16.249459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 [2024-12-10 02:57:16.249496] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 [2024-12-10 02:57:16.249518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 [2024-12-10 02:57:16.249535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:22.182 [2024-12-10 02:57:16.251157] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 [2024-12-10 02:57:16.251190] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 [2024-12-10 02:57:16.251206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 [2024-12-10 02:57:16.251219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:22.182 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:22.182 EAL: Scan for (pci) bus failed. 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:22.182 Attaching to 0000:00:10.0 00:09:22.182 Attached to 0000:00:10.0 00:09:22.182 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:22.182 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:22.182 02:57:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:22.182 Attaching to 0000:00:11.0 00:09:22.182 Attached to 0000:00:11.0 00:09:23.116 QEMU NVMe Ctrl (12340 ): 3231 I/Os completed (+3231) 00:09:23.116 QEMU NVMe Ctrl (12341 ): 2979 I/Os completed (+2979) 00:09:23.116 00:09:24.052 QEMU NVMe Ctrl (12340 ): 6449 I/Os completed (+3218) 00:09:24.052 QEMU NVMe Ctrl (12341 ): 6180 I/Os completed (+3201) 00:09:24.052 00:09:25.425 QEMU NVMe Ctrl (12340 ): 9829 I/Os completed (+3380) 00:09:25.425 QEMU NVMe Ctrl (12341 ): 9576 I/Os completed (+3396) 00:09:25.425 00:09:26.359 QEMU NVMe Ctrl (12340 ): 13018 I/Os completed (+3189) 00:09:26.359 QEMU NVMe Ctrl (12341 ): 12897 I/Os completed (+3321) 00:09:26.359 00:09:27.292 QEMU NVMe Ctrl (12340 ): 16611 I/Os completed (+3593) 00:09:27.292 QEMU NVMe Ctrl (12341 ): 16503 I/Os completed (+3606) 00:09:27.292 00:09:28.226 QEMU NVMe Ctrl (12340 ): 19944 I/Os completed (+3333) 00:09:28.226 QEMU NVMe Ctrl (12341 ): 19923 I/Os completed (+3420) 00:09:28.226 00:09:29.159 QEMU NVMe Ctrl (12340 ): 23244 I/Os completed (+3300) 00:09:29.159 
QEMU NVMe Ctrl (12341 ): 23239 I/Os completed (+3316) 00:09:29.159 00:09:30.093 QEMU NVMe Ctrl (12340 ): 26456 I/Os completed (+3212) 00:09:30.093 QEMU NVMe Ctrl (12341 ): 26633 I/Os completed (+3394) 00:09:30.093 00:09:31.463 QEMU NVMe Ctrl (12340 ): 29556 I/Os completed (+3100) 00:09:31.463 QEMU NVMe Ctrl (12341 ): 29661 I/Os completed (+3028) 00:09:31.463 00:09:32.395 QEMU NVMe Ctrl (12340 ): 33195 I/Os completed (+3639) 00:09:32.395 QEMU NVMe Ctrl (12341 ): 33262 I/Os completed (+3601) 00:09:32.395 00:09:33.327 QEMU NVMe Ctrl (12340 ): 36319 I/Os completed (+3124) 00:09:33.327 QEMU NVMe Ctrl (12341 ): 36498 I/Os completed (+3236) 00:09:33.327 00:09:34.260 QEMU NVMe Ctrl (12340 ): 39631 I/Os completed (+3312) 00:09:34.260 QEMU NVMe Ctrl (12341 ): 39807 I/Os completed (+3309) 00:09:34.260 00:09:34.260 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:34.260 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:34.260 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:34.260 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:34.260 [2024-12-10 02:57:28.498333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:34.260 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:34.260 [2024-12-10 02:57:28.499487] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 [2024-12-10 02:57:28.499534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 [2024-12-10 02:57:28.499554] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 [2024-12-10 02:57:28.499571] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:34.260 [2024-12-10 02:57:28.501465] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 [2024-12-10 02:57:28.501506] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 [2024-12-10 02:57:28.501520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 [2024-12-10 02:57:28.501534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:34.260 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:34.260 [2024-12-10 02:57:28.521194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:34.260 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:34.260 [2024-12-10 02:57:28.522248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 [2024-12-10 02:57:28.522284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 [2024-12-10 02:57:28.522304] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 [2024-12-10 02:57:28.522319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:34.260 [2024-12-10 02:57:28.523955] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 [2024-12-10 02:57:28.523988] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 [2024-12-10 02:57:28.524002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 [2024-12-10 02:57:28.524017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.260 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:34.260 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:34.260 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:34.260 EAL: Scan for (pci) bus failed. 00:09:34.260 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:34.261 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:34.261 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:34.518 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:34.518 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:34.518 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:34.518 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:34.518 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:34.518 Attaching to 0000:00:10.0 00:09:34.518 Attached to 0000:00:10.0 00:09:34.518 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:34.518 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:34.518 02:57:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:34.518 Attaching to 0000:00:11.0 00:09:34.518 Attached to 0000:00:11.0 00:09:35.084 QEMU NVMe Ctrl (12340 ): 2368 I/Os completed (+2368) 00:09:35.084 QEMU NVMe Ctrl (12341 ): 2104 I/Os completed (+2104) 00:09:35.084 00:09:36.058 QEMU NVMe Ctrl (12340 ): 6035 I/Os completed (+3667) 00:09:36.058 QEMU NVMe Ctrl (12341 ): 5775 I/Os completed (+3671) 00:09:36.058 00:09:37.431 QEMU NVMe Ctrl (12340 ): 9706 I/Os completed (+3671) 00:09:37.431 QEMU NVMe Ctrl (12341 ): 9443 I/Os completed (+3668) 00:09:37.431 00:09:38.364 QEMU NVMe Ctrl (12340 ): 13364 I/Os completed (+3658) 00:09:38.364 QEMU NVMe Ctrl (12341 ): 13101 I/Os completed (+3658) 00:09:38.364 00:09:39.296 QEMU NVMe Ctrl (12340 ): 17038 I/Os completed (+3674) 00:09:39.296 QEMU NVMe Ctrl (12341 ): 16792 I/Os completed (+3691) 00:09:39.296 00:09:40.230 QEMU NVMe Ctrl (12340 ): 20701 I/Os completed (+3663) 00:09:40.230 QEMU NVMe Ctrl (12341 ): 20479 I/Os completed (+3687) 00:09:40.230 00:09:41.167 QEMU NVMe Ctrl (12340 ): 24400 I/Os completed (+3699) 00:09:41.167 QEMU NVMe Ctrl (12341 ): 24164 I/Os completed (+3685) 00:09:41.167 
00:09:42.101 QEMU NVMe Ctrl (12340 ): 28101 I/Os completed (+3701) 00:09:42.101 QEMU NVMe Ctrl (12341 ): 27853 I/Os completed (+3689) 00:09:42.101 00:09:43.474 QEMU NVMe Ctrl (12340 ): 31329 I/Os completed (+3228) 00:09:43.474 QEMU NVMe Ctrl (12341 ): 31076 I/Os completed (+3223) 00:09:43.474 00:09:44.408 QEMU NVMe Ctrl (12340 ): 34557 I/Os completed (+3228) 00:09:44.408 QEMU NVMe Ctrl (12341 ): 34278 I/Os completed (+3202) 00:09:44.408 00:09:45.343 QEMU NVMe Ctrl (12340 ): 37706 I/Os completed (+3149) 00:09:45.343 QEMU NVMe Ctrl (12341 ): 37468 I/Os completed (+3190) 00:09:45.343 00:09:46.276 QEMU NVMe Ctrl (12340 ): 41316 I/Os completed (+3610) 00:09:46.276 QEMU NVMe Ctrl (12341 ): 41104 I/Os completed (+3636) 00:09:46.276 00:09:46.535 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:46.535 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:46.535 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:46.535 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:46.535 [2024-12-10 02:57:40.772987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:46.535 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:46.535 [2024-12-10 02:57:40.774042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 [2024-12-10 02:57:40.774160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 [2024-12-10 02:57:40.774190] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 [2024-12-10 02:57:40.774250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:46.535 [2024-12-10 02:57:40.776065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 [2024-12-10 02:57:40.776169] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 [2024-12-10 02:57:40.776208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 [2024-12-10 02:57:40.776236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:46.535 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:46.535 [2024-12-10 02:57:40.793437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:46.535 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:46.535 [2024-12-10 02:57:40.794323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 [2024-12-10 02:57:40.794458] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 [2024-12-10 02:57:40.794564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 [2024-12-10 02:57:40.794591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:46.535 [2024-12-10 02:57:40.796009] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 [2024-12-10 02:57:40.796041] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 [2024-12-10 02:57:40.796056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 [2024-12-10 02:57:40.796068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.535 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:46.535 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:46.535 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:46.535 EAL: Scan for (pci) bus failed. 00:09:46.535 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:46.535 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:46.535 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:46.794 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:46.794 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:46.794 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:46.794 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:46.794 02:57:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:46.794 Attaching to 0000:00:10.0 00:09:46.794 Attached to 0000:00:10.0 00:09:46.794 02:57:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:46.794 02:57:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:46.794 02:57:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:46.794 Attaching to 0000:00:11.0 00:09:46.794 Attached to 0000:00:11.0 00:09:46.794 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:46.794 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:46.794 [2024-12-10 02:57:41.034068] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:09:58.990 02:57:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:58.991 02:57:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:58.991 02:57:53 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.81 00:09:58.991 02:57:53 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.81 00:09:58.991 02:57:53 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:09:58.991 02:57:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.81 00:09:58.991 02:57:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.81 2 00:09:58.991 remove_attach_helper took 42.81s to complete (handling 2 nvme drive(s)) 02:57:53 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:05.548 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66687 00:10:05.548 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66687) - No such process 00:10:05.548 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66687 00:10:05.548 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:05.548 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:05.548 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:05.548 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67230 00:10:05.548 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:05.548 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67230 00:10:05.548 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:05.548 02:57:59 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67230 ']' 00:10:05.548 02:57:59 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.548 02:57:59 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.548 02:57:59 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.548 02:57:59 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.548 02:57:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:05.548 [2024-12-10 02:57:59.114977] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:10:05.548 [2024-12-10 02:57:59.115096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67230 ] 00:10:05.548 [2024-12-10 02:57:59.277322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.548 [2024-12-10 02:57:59.375653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.805 02:57:59 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.805 02:57:59 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:05.805 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:05.805 02:57:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.805 02:57:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:05.805 02:57:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.805 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:05.805 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:05.805 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:05.805 02:57:59 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:05.805 02:57:59 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:05.805 02:57:59 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:05.805 02:57:59 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:05.805 02:57:59 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:05.805 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:05.805 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:05.805 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:05.805 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:05.805 02:57:59 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:12.364 02:58:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:12.364 02:58:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.364 02:58:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:12.364 02:58:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.364 02:58:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:12.364 02:58:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:12.364 [2024-12-10 02:58:06.066281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:12.364 [2024-12-10 02:58:06.067775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.364 [2024-12-10 02:58:06.067810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.364 [2024-12-10 02:58:06.067823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.364 [2024-12-10 02:58:06.067841] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.364 [2024-12-10 02:58:06.067849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.364 [2024-12-10 02:58:06.067857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.364 [2024-12-10 02:58:06.067864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.364 [2024-12-10 02:58:06.067872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.364 [2024-12-10 02:58:06.067878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.364 [2024-12-10 02:58:06.067890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.364 [2024-12-10 02:58:06.067896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.364 [2024-12-10 02:58:06.067904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.364 [2024-12-10 02:58:06.466289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:12.364 [2024-12-10 02:58:06.467711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.364 [2024-12-10 02:58:06.467744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.364 [2024-12-10 02:58:06.467756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.364 [2024-12-10 02:58:06.467772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.364 [2024-12-10 02:58:06.467781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.364 [2024-12-10 02:58:06.467789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.364 [2024-12-10 02:58:06.467799] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.364 [2024-12-10 02:58:06.467805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.364 [2024-12-10 02:58:06.467814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.364 [2024-12-10 02:58:06.467821] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.364 [2024-12-10 02:58:06.467829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.364 [2024-12-10 02:58:06.467836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:12.364 02:58:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.364 02:58:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:12.364 02:58:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.364 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:12.622 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:12.622 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.622 02:58:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:24.932 02:58:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.932 02:58:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:24.932 02:58:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.932 [2024-12-10 02:58:18.866510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:24.932 [2024-12-10 02:58:18.868012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.932 [2024-12-10 02:58:18.868053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.932 [2024-12-10 02:58:18.868065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.932 [2024-12-10 02:58:18.868084] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.932 [2024-12-10 02:58:18.868092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.932 [2024-12-10 02:58:18.868100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.932 [2024-12-10 02:58:18.868108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.932 [2024-12-10 02:58:18.868117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.932 [2024-12-10 02:58:18.868123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.932 [2024-12-10 02:58:18.868132] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.932 [2024-12-10 02:58:18.868138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.932 [2024-12-10 02:58:18.868146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.932 02:58:18 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:24.932 02:58:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.932 02:58:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:24.932 02:58:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:24.932 02:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:25.191 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:25.191 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:25.191 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:25.191 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:25.191 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:25.191 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:25.191 02:58:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.191 02:58:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:25.191 02:58:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.191 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:25.191 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:25.191 [2024-12-10 02:58:19.466511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
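
The `(( 1 > 0 ))` / `sleep 0.5` / `Still waiting for 0000:00:11.0 to be gone` lines above are single passes of the removal poll at sw_hotplug.sh@50-51: after writing the remove triggers, the script keeps re-running bdev_bdfs until every detached controller has dropped out of the RPC view. The control flow implied by the trace, as a sketch (the exact loop syntax is an assumption):

    # Poll about twice a second until no removed BDF is still visible
    # to the target; each pass re-queries the bdev list over RPC.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done
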
00:10:25.191 [2024-12-10 02:58:19.467887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.191 [2024-12-10 02:58:19.467923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.191 [2024-12-10 02:58:19.467936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.191 [2024-12-10 02:58:19.467952] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.191 [2024-12-10 02:58:19.467961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.191 [2024-12-10 02:58:19.467968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.191 [2024-12-10 02:58:19.467976] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.191 [2024-12-10 02:58:19.467982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.191 [2024-12-10 02:58:19.467990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.191 [2024-12-10 02:58:19.467997] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.191 [2024-12-10 02:58:19.468005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.191 [2024-12-10 02:58:19.468011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.757 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:25.757 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:25.757 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:25.757 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:25.757 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:25.757 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:25.757 02:58:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.757 02:58:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:25.757 02:58:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.757 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:25.757 02:58:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:25.757 02:58:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.757 02:58:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.757 02:58:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:25.757 02:58:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:26.015 02:58:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:26.015 02:58:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:26.015 02:58:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:26.015 02:58:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:26.015 02:58:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:26.015 02:58:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:26.015 02:58:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:38.270 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:38.270 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:38.270 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:38.270 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.270 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.270 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.270 02:58:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.270 02:58:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.270 02:58:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.270 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:38.270 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:38.270 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.270 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.270 [2024-12-10 02:58:32.266730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:38.270 [2024-12-10 02:58:32.268178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.270 [2024-12-10 02:58:32.268212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.270 [2024-12-10 02:58:32.268223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.270 [2024-12-10 02:58:32.268240] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.270 [2024-12-10 02:58:32.268247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.270 [2024-12-10 02:58:32.268257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.271 [2024-12-10 02:58:32.268264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.271 [2024-12-10 02:58:32.268272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.271 [2024-12-10 02:58:32.268278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.271 [2024-12-10 02:58:32.268287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.271 [2024-12-10 02:58:32.268293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.271 [2024-12-10 02:58:32.268301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.271 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.271 02:58:32 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.271 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:38.271 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.271 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.271 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.271 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.271 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.271 02:58:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.271 02:58:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.271 02:58:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.271 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:38.271 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:38.539 [2024-12-10 02:58:32.666733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:38.539 [2024-12-10 02:58:32.668069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.539 [2024-12-10 02:58:32.668107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.539 [2024-12-10 02:58:32.668119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.539 [2024-12-10 02:58:32.668135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.539 [2024-12-10 02:58:32.668145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.539 [2024-12-10 02:58:32.668152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.539 [2024-12-10 02:58:32.668160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.539 [2024-12-10 02:58:32.668167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.540 [2024-12-10 02:58:32.668177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.540 [2024-12-10 02:58:32.668184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.540 [2024-12-10 02:58:32.668192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.540 [2024-12-10 02:58:32.668199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.540 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:38.540 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.540 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.540 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.540 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.540 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
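
`rpc_cmd bdev_get_bdevs`, which closes the entry above, is the only point where the test talks to the running SPDK target; everything else here is sysfs writes and shell. The wrapper itself is never expanded by xtrace; in autotest it appears to keep a persistent RPC session, but a one-shot stand-in built on SPDK's bundled client would be simply (the simplification, and reuse of the repo path seen elsewhere in this log, are assumptions):

    # Minimal stand-in for the traced rpc_cmd: forward the request to
    # the target's JSON-RPC socket via SPDK's bundled client script.
    rpc_cmd() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"
    }
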
00:10:38.540 02:58:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.540 02:58:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.540 02:58:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.540 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:38.540 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:38.839 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:38.839 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:38.839 02:58:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:38.839 02:58:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:38.839 02:58:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.839 02:58:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:38.839 02:58:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:38.839 02:58:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:38.839 02:58:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:38.839 02:58:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.839 02:58:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:51.035 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:51.035 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:51.035 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:51.035 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.035 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.035 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.035 02:58:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.035 02:58:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.035 02:58:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.035 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:51.035 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:51.035 02:58:45 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.17 00:10:51.035 02:58:45 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.17 00:10:51.035 02:58:45 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:51.035 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:10:51.036 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:10:51.036 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:10:51.036 02:58:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.036 02:58:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.036 02:58:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.036 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:51.036 02:58:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.036 02:58:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.036 02:58:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.036 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:10:51.036 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:51.036 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:51.036 02:58:45 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:51.036 02:58:45 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:51.036 02:58:45 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:51.036 02:58:45 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:51.036 02:58:45 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:51.036 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:51.036 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:51.036 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:51.036 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:51.036 02:58:45 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:57.593 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:57.593 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.593 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.593 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.593 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.593 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:57.593 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.593 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.593 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.593 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.593 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.593 02:58:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.593 02:58:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.593 02:58:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.593 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:57.593 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:57.593 [2024-12-10 02:58:51.262281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
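
The autotest_common.sh@709-722 entries above show the timing harness wrapping this next `remove_attach_helper 3 6 true` run: TIMEFORMAT is pinned to %2R, the helper runs under bash's built-in `time`, and the elapsed seconds come back to the caller (the previous pass reported `remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s))`). A sketch of that wrapper inferred from the trace; the real helper's fd juggling around `exec` is not reproduced here:

    # Time a command with bash's `time`, keep only the real-time
    # seconds (TIMEFORMAT=%2R), and preserve the command's exit code.
    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        time=$({ time "$@" > /dev/null 2>&1; } 2>&1) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }
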
00:10:57.593 [2024-12-10 02:58:51.263583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.593 [2024-12-10 02:58:51.263619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.593 [2024-12-10 02:58:51.263635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.593 [2024-12-10 02:58:51.263655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.593 [2024-12-10 02:58:51.263665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.593 [2024-12-10 02:58:51.263673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.593 [2024-12-10 02:58:51.263683] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.593 [2024-12-10 02:58:51.263691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.593 [2024-12-10 02:58:51.263698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.593 [2024-12-10 02:58:51.263706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.593 [2024-12-10 02:58:51.263713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.593 [2024-12-10 02:58:51.263728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.593 [2024-12-10 02:58:51.662276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:57.593 [2024-12-10 02:58:51.663479] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.594 [2024-12-10 02:58:51.663510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.594 [2024-12-10 02:58:51.663521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.594 [2024-12-10 02:58:51.663537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.594 [2024-12-10 02:58:51.663546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.594 [2024-12-10 02:58:51.663553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.594 [2024-12-10 02:58:51.663563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.594 [2024-12-10 02:58:51.663570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.594 [2024-12-10 02:58:51.663578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.594 [2024-12-10 02:58:51.663585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.594 [2024-12-10 02:58:51.663593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.594 [2024-12-10 02:58:51.663599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.594 02:58:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.594 02:58:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.594 02:58:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.594 02:58:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:57.852 02:58:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:57.852 02:58:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.852 02:58:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.043 02:59:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.043 02:59:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.043 02:59:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.043 [2024-12-10 02:59:04.062497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:10.043 [2024-12-10 02:59:04.063732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.043 [2024-12-10 02:59:04.063853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.043 [2024-12-10 02:59:04.063921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.043 [2024-12-10 02:59:04.063997] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.043 [2024-12-10 02:59:04.064017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.043 [2024-12-10 02:59:04.064112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.043 [2024-12-10 02:59:04.064141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.043 [2024-12-10 02:59:04.064191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.043 [2024-12-10 02:59:04.064217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.043 [2024-12-10 02:59:04.064267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.043 [2024-12-10 02:59:04.064286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.043 [2024-12-10 02:59:04.064296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.043 02:59:04 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.043 02:59:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.043 02:59:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.043 02:59:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:10.043 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.302 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:10.302 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.302 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.302 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.302 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.302 02:59:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.302 02:59:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.302 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.302 02:59:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.302 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:10.302 02:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.560 [2024-12-10 02:59:04.762514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
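
The `echo 1` writes at sw_hotplug.sh@39-40 in this stretch are the removal half of each hotplug event: one write per controller, immediately answered by the `in failed state` / `aborting outstanding command` flood as the driver loses the device. xtrace does not record redirections, so the target path never appears in the log; assuming the standard Linux PCI sysfs interface, the operation would be:

    # Surprise-remove each controller from the PCI bus. The /sys path
    # is an assumption -- the trace only shows the bare `echo 1`.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done
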
00:11:10.560 [2024-12-10 02:59:04.763534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.560 [2024-12-10 02:59:04.763681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.560 [2024-12-10 02:59:04.763699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.560 [2024-12-10 02:59:04.763716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.560 [2024-12-10 02:59:04.763727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.560 [2024-12-10 02:59:04.763735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.560 [2024-12-10 02:59:04.763743] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.560 [2024-12-10 02:59:04.763751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.560 [2024-12-10 02:59:04.763760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.560 [2024-12-10 02:59:04.763767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.560 [2024-12-10 02:59:04.763775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.560 [2024-12-10 02:59:04.763781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.818 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:10.818 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.818 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.818 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.818 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.818 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.818 02:59:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.818 02:59:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.818 02:59:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.076 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:11.076 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:11.076 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.076 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.076 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:11.076 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:11.076 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.076 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.076 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.076 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:11.076 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:11.076 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.076 02:59:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.275 02:59:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.275 02:59:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.275 02:59:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.275 02:59:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.275 02:59:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.275 02:59:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:23.275 02:59:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:23.275 [2024-12-10 02:59:17.562720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
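
The re-attach side traced just above (sw_hotplug.sh@56-62) is the mirror image: one global `echo 1`, then per device an `echo uio_pci_generic`, two `echo <BDF>` writes, and a closing `echo ''`. The redirection targets are again missing from the trace; read against the stock sysfs layout, a plausible reconstruction is (every path below is an assumption, and the sketch keeps only one of the two per-device BDF writes because the second one's destination is not recoverable from the log):

    echo 1 > /sys/bus/pci/rescan                 # @56: re-enumerate the bus
    for dev in "${nvmes[@]}"; do                 # @58
        # @59: pin the userspace driver for this device
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        # @60/@61: ask the kernel to (re)probe the device
        echo "$dev" > /sys/bus/pci/drivers_probe
        # @62: clear the override afterwards
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done
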
00:11:23.275 [2024-12-10 02:59:17.563729] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.275 [2024-12-10 02:59:17.563765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.275 [2024-12-10 02:59:17.563776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.275 [2024-12-10 02:59:17.563797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.275 [2024-12-10 02:59:17.563804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.275 [2024-12-10 02:59:17.563812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.275 [2024-12-10 02:59:17.563820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.275 [2024-12-10 02:59:17.563830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.275 [2024-12-10 02:59:17.563837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.275 [2024-12-10 02:59:17.563845] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.275 [2024-12-10 02:59:17.563851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.275 [2024-12-10 02:59:17.563860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.840 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:23.840 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.840 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.840 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.840 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.840 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.840 02:59:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.840 02:59:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.840 [2024-12-10 02:59:18.062725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
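
With both controllers back, every iteration finishes with the settle-and-verify steps traced earlier at sw_hotplug.sh@66-71: a fixed 12 s sleep, a fresh BDF listing, and an exact pattern match against the expected pair before `(( hotplug_events-- ))` starts the next round. In sketch form (expected list hard-coded for illustration; under the suite's `set -e`, a mismatch aborts the test):

    sleep 12                  # @66: let probe and bdev attach settle
    bdfs=($(bdev_bdfs))       # @70: what the target sees after re-attach
    # @71: must match the original device set exactly
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]
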
00:11:23.840 02:59:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.840 [2024-12-10 02:59:18.063718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.840 [2024-12-10 02:59:18.063748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.840 [2024-12-10 02:59:18.063760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.840 [2024-12-10 02:59:18.063776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.840 [2024-12-10 02:59:18.063784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.841 [2024-12-10 02:59:18.063792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.841 [2024-12-10 02:59:18.063803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.841 [2024-12-10 02:59:18.063809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.841 [2024-12-10 02:59:18.063817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.841 [2024-12-10 02:59:18.063824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.841 [2024-12-10 02:59:18.063835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.841 [2024-12-10 02:59:18.063842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.841 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:23.841 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.405 02:59:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.405 02:59:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.405 02:59:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.405 02:59:18 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.405 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:24.662 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:24.662 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.662 02:59:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:36.874 02:59:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:36.874 02:59:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:36.874 02:59:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:36.874 02:59:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.874 02:59:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.874 02:59:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.874 02:59:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:36.874 02:59:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.70 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.70 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:36.874 02:59:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.70 00:11:36.874 02:59:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.70 2 00:11:36.874 remove_attach_helper took 45.70s to complete (handling 2 nvme drive(s)) 02:59:30 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:36.874 02:59:30 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67230 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67230 ']' 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67230 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67230 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67230' 00:11:36.874 killing process with pid 67230 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67230 00:11:36.874 02:59:30 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67230 00:11:37.818 02:59:32 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:38.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:38.651 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:38.651 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:38.651 0000:00:13.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:38.651 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:38.651 00:11:38.651 real 2m30.083s 00:11:38.651 user 1m52.432s 00:11:38.651 sys 0m16.357s 00:11:38.651 ************************************ 00:11:38.651 END TEST sw_hotplug 00:11:38.651 ************************************ 00:11:38.651 02:59:32 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.651 02:59:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.915 02:59:33 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:38.915 02:59:33 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:38.915 02:59:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:38.915 02:59:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.915 02:59:33 -- common/autotest_common.sh@10 -- # set +x 00:11:38.915 ************************************ 00:11:38.915 START TEST nvme_xnvme 00:11:38.915 ************************************ 00:11:38.915 02:59:33 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:38.915 * Looking for test storage... 00:11:38.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:38.915 02:59:33 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:38.915 02:59:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:38.915 02:59:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:38.915 02:59:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.915 02:59:33 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:38.915 02:59:33 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.915 02:59:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:38.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.915 --rc genhtml_branch_coverage=1 00:11:38.915 --rc genhtml_function_coverage=1 00:11:38.915 --rc genhtml_legend=1 00:11:38.915 --rc geninfo_all_blocks=1 00:11:38.915 --rc geninfo_unexecuted_blocks=1 00:11:38.915 00:11:38.915 ' 00:11:38.915 02:59:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:38.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.915 --rc genhtml_branch_coverage=1 00:11:38.915 --rc genhtml_function_coverage=1 00:11:38.915 --rc genhtml_legend=1 00:11:38.915 --rc geninfo_all_blocks=1 00:11:38.915 --rc geninfo_unexecuted_blocks=1 00:11:38.915 00:11:38.915 ' 00:11:38.915 02:59:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:38.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.915 --rc genhtml_branch_coverage=1 00:11:38.915 --rc genhtml_function_coverage=1 00:11:38.915 --rc genhtml_legend=1 00:11:38.915 --rc geninfo_all_blocks=1 00:11:38.915 --rc geninfo_unexecuted_blocks=1 00:11:38.915 00:11:38.915 ' 00:11:38.915 02:59:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:38.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.915 --rc genhtml_branch_coverage=1 00:11:38.915 --rc genhtml_function_coverage=1 00:11:38.915 --rc genhtml_legend=1 00:11:38.916 --rc geninfo_all_blocks=1 00:11:38.916 --rc geninfo_unexecuted_blocks=1 00:11:38.916 00:11:38.916 ' 00:11:38.916 02:59:33 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:11:38.916 02:59:33 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:38.916 02:59:33 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:38.916 02:59:33 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:11:38.916 02:59:33 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:38.916 02:59:33 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:38.916 02:59:33 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:38.916 02:59:33 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:38.916 02:59:33 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:38.916 02:59:33 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
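
Just before this build-config dump, the nvme_xnvme preamble walked scripts/common.sh's `lt 1.15 2` check to choose lcov options: `cmp_versions` splits both version strings on `.-:` into arrays and compares them field by field (the `(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))` loop in the trace). A compact equivalent of that check, deliberately swapped to GNU `sort -V` instead of the field loop the script actually traces:

    # True when version $1 is strictly older than $2, e.g. lt 1.15 2.
    lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
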
00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:38.916 02:59:33 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:38.916 02:59:33 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:38.916 02:59:33 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:38.916 02:59:33 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:38.916 #define SPDK_CONFIG_H 00:11:38.916 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:38.916 #define SPDK_CONFIG_APPS 1 00:11:38.916 #define SPDK_CONFIG_ARCH native 00:11:38.916 #define SPDK_CONFIG_ASAN 1 00:11:38.916 #undef SPDK_CONFIG_AVAHI 00:11:38.916 #undef SPDK_CONFIG_CET 00:11:38.916 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:38.916 #define SPDK_CONFIG_COVERAGE 1 00:11:38.916 #define SPDK_CONFIG_CROSS_PREFIX 00:11:38.916 #undef SPDK_CONFIG_CRYPTO 00:11:38.916 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:38.916 #undef SPDK_CONFIG_CUSTOMOCF 00:11:38.916 #undef SPDK_CONFIG_DAOS 00:11:38.916 #define SPDK_CONFIG_DAOS_DIR 00:11:38.916 #define SPDK_CONFIG_DEBUG 1 00:11:38.916 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:38.916 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:38.916 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:38.916 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:38.916 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:38.916 #undef SPDK_CONFIG_DPDK_UADK 00:11:38.916 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:38.916 #define SPDK_CONFIG_EXAMPLES 1 00:11:38.916 #undef SPDK_CONFIG_FC 00:11:38.916 #define SPDK_CONFIG_FC_PATH 00:11:38.916 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:38.916 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:38.916 #define SPDK_CONFIG_FSDEV 1 00:11:38.916 #undef SPDK_CONFIG_FUSE 00:11:38.917 #undef SPDK_CONFIG_FUZZER 00:11:38.917 #define SPDK_CONFIG_FUZZER_LIB 00:11:38.917 #undef SPDK_CONFIG_GOLANG 00:11:38.917 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:38.917 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:38.917 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:38.917 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:38.917 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:38.917 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:38.917 #undef SPDK_CONFIG_HAVE_LZ4 00:11:38.917 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:38.917 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:38.917 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:38.917 #define SPDK_CONFIG_IDXD 1 00:11:38.917 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:38.917 #undef SPDK_CONFIG_IPSEC_MB 00:11:38.917 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:38.917 #define SPDK_CONFIG_ISAL 1 00:11:38.917 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:38.917 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:38.917 #define SPDK_CONFIG_LIBDIR 00:11:38.917 #undef SPDK_CONFIG_LTO 00:11:38.917 #define SPDK_CONFIG_MAX_LCORES 128 00:11:38.917 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:38.917 #define SPDK_CONFIG_NVME_CUSE 1 00:11:38.917 #undef SPDK_CONFIG_OCF 00:11:38.917 #define SPDK_CONFIG_OCF_PATH 00:11:38.917 #define SPDK_CONFIG_OPENSSL_PATH 00:11:38.917 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:38.917 #define SPDK_CONFIG_PGO_DIR 00:11:38.917 #undef SPDK_CONFIG_PGO_USE 00:11:38.917 #define SPDK_CONFIG_PREFIX /usr/local 00:11:38.917 #undef SPDK_CONFIG_RAID5F 00:11:38.917 #undef SPDK_CONFIG_RBD 00:11:38.917 #define SPDK_CONFIG_RDMA 1 00:11:38.917 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:38.917 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:38.917 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:38.917 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:38.917 #define SPDK_CONFIG_SHARED 1 00:11:38.917 #undef SPDK_CONFIG_SMA 00:11:38.917 #define SPDK_CONFIG_TESTS 1 00:11:38.917 #undef SPDK_CONFIG_TSAN 00:11:38.917 #define SPDK_CONFIG_UBLK 1 00:11:38.917 #define SPDK_CONFIG_UBSAN 1 00:11:38.917 #undef SPDK_CONFIG_UNIT_TESTS 00:11:38.917 #undef SPDK_CONFIG_URING 00:11:38.917 #define SPDK_CONFIG_URING_PATH 00:11:38.917 #undef SPDK_CONFIG_URING_ZNS 00:11:38.917 #undef SPDK_CONFIG_USDT 00:11:38.917 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:38.917 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:38.917 #undef SPDK_CONFIG_VFIO_USER 00:11:38.917 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:38.917 #define SPDK_CONFIG_VHOST 1 00:11:38.917 #define SPDK_CONFIG_VIRTIO 1 00:11:38.917 #undef SPDK_CONFIG_VTUNE 00:11:38.917 #define SPDK_CONFIG_VTUNE_DIR 00:11:38.917 #define SPDK_CONFIG_WERROR 1 00:11:38.917 #define SPDK_CONFIG_WPDK_DIR 00:11:38.917 #define SPDK_CONFIG_XNVME 1 00:11:38.917 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:38.917 02:59:33 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:38.917 02:59:33 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:38.917 02:59:33 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.917 02:59:33 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.917 02:59:33 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.917 02:59:33 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.917 02:59:33 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.917 02:59:33 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.917 02:59:33 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.917 02:59:33 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:38.917 02:59:33 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@68 -- # uname -s 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:38.917 
02:59:33 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:38.917 02:59:33 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:38.917 02:59:33 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:38.918 02:59:33 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:38.918 02:59:33 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
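
A minimal sketch of the sanitizer environment being assembled in this stretch of the trace: harden the ASan/UBSan runtime options, then point LeakSanitizer at a suppression file so a known libfuse3 leak does not fail the run. The exported values and the suppressed symbol are taken from the trace; treat the surrounding scaffolding as illustrative.

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" >> "$asan_suppression_file"   # known fuse3 leak, ignored
export LSAN_OPTIONS=suppressions=$asan_suppression_file
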
00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:38.918 02:59:33 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68605 ]] 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68605 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.CtmeFf 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.CtmeFf/tests/xnvme /tmp/spdk.CtmeFf 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:38.919 02:59:33 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975715840 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592268800 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:38.919 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:39.180 02:59:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:39.180 02:59:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:39.180 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:39.180 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:39.180 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260625408 00:11:39.180 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:11:39.180 02:59:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975715840 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592268800 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265237504 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:39.181 02:59:33 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=96497074176 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3205705728 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:39.181 * Looking for test storage... 
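
The storage search that follows is the set_test_storage helper walking candidate directories and picking the first one whose backing mount has at least the requested 2 GiB free. A condensed sketch of that logic, with variable names taken from the trace; the real helper in test/common/autotest_common.sh handles more cases, and the df byte-unit flag here is an assumption (the trace compares byte counts directly):

requested_size=2147483648    # 2 GiB, as requested above
testdir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
storage_fallback=$(mktemp -udt spdk.XXXXXX)
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
mkdir -p "${storage_candidates[@]}"

# Index df output by mount point: source device, fs type, size, used, avail.
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source fss["$mount"]=$fs
    sizes["$mount"]=$size avails["$mount"]=$avail uses["$mount"]=$use
done < <(df -T -B1 | grep -v Filesystem)

for target_dir in "${storage_candidates[@]}"; do
    # Resolve the mount point backing this candidate directory.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
    target_space=${avails[$mount]:-0}
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done

In the run below, /home (btrfs on /dev/vda5, ~13.9 GB available) passes the check first, so the test directory itself is used.
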
00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975715840 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:39.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.181 02:59:33 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:39.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.181 --rc genhtml_branch_coverage=1 00:11:39.181 --rc genhtml_function_coverage=1 00:11:39.181 --rc genhtml_legend=1 00:11:39.181 --rc geninfo_all_blocks=1 00:11:39.181 --rc geninfo_unexecuted_blocks=1 00:11:39.181 00:11:39.181 ' 00:11:39.181 02:59:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:39.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.181 --rc genhtml_branch_coverage=1 00:11:39.181 --rc genhtml_function_coverage=1 00:11:39.181 --rc genhtml_legend=1 00:11:39.181 --rc geninfo_all_blocks=1 
00:11:39.181 --rc geninfo_unexecuted_blocks=1 00:11:39.181 00:11:39.181 ' 00:11:39.182 02:59:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:39.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.182 --rc genhtml_branch_coverage=1 00:11:39.182 --rc genhtml_function_coverage=1 00:11:39.182 --rc genhtml_legend=1 00:11:39.182 --rc geninfo_all_blocks=1 00:11:39.182 --rc geninfo_unexecuted_blocks=1 00:11:39.182 00:11:39.182 ' 00:11:39.182 02:59:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:39.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.182 --rc genhtml_branch_coverage=1 00:11:39.182 --rc genhtml_function_coverage=1 00:11:39.182 --rc genhtml_legend=1 00:11:39.182 --rc geninfo_all_blocks=1 00:11:39.182 --rc geninfo_unexecuted_blocks=1 00:11:39.182 00:11:39.182 ' 00:11:39.182 02:59:33 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:39.182 02:59:33 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.182 02:59:33 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.182 02:59:33 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.182 02:59:33 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.182 02:59:33 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.182 02:59:33 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.182 02:59:33 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.182 02:59:33 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:39.182 02:59:33 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.182 02:59:33 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:11:39.182 02:59:33 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:39.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:39.704 Waiting for block devices as requested 00:11:39.704 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.704 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.704 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.704 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:44.988 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:44.988 02:59:39 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:11:45.250 02:59:39 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:11:45.250 02:59:39 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:11:45.512 02:59:39 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:45.512 02:59:39 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:45.512 No valid GPT data, bailing 00:11:45.512 02:59:39 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:45.512 02:59:39 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:11:45.512 02:59:39 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:11:45.512 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:11:45.512 02:59:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:45.512 02:59:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.512 02:59:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:45.512 ************************************ 00:11:45.512 START TEST xnvme_rpc 00:11:45.512 ************************************ 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=68992 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 68992 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68992 ']' 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.512 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.512 [2024-12-10 02:59:39.863050] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
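
A condensed sketch of the xnvme_rpc test now starting: bring up spdk_tgt, create an xnvme bdev over the raw namespace, read the saved configuration back and check it, then tear down. The RPC names, positional arguments, and jq filters are the ones visible in the trace; the waitforlisten/killprocess helpers are replaced with plain sleep/kill for brevity, so this is an illustration, not the test source.

rootdir=/home/vagrant/spdk_repo/spdk
rpc_py="$rootdir/scripts/rpc.py"

"$rootdir/build/bin/spdk_tgt" &
spdk_tgt=$!
sleep 2    # the real test uses waitforlisten on /var/tmp/spdk.sock

# Positional args: filename, bdev name, io_mechanism; appending -c would set
# conserve_cpu=true (cc["true"]=-c in the trace, cc["false"] is empty).
"$rpc_py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio

# Round-trip check: pull the stored bdev_xnvme_create params back out and
# verify each field, exactly like the rpc_xnvme jq filters in the trace.
name=$("$rpc_py" framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.name')
[[ $name == xnvme_bdev ]] || { echo "unexpected bdev name: $name"; exit 1; }

"$rpc_py" bdev_xnvme_delete xnvme_bdev   # clean up the bdev...
kill "$spdk_tgt"                         # ...and the target process
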
00:11:45.512 [2024-12-10 02:59:39.863166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68992 ] 00:11:45.773 [2024-12-10 02:59:40.021626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.773 [2024-12-10 02:59:40.125424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.738 xnvme_bdev 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:11:46.738 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:11:46.739 02:59:40 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 68992 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68992 ']' 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68992 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68992 00:11:46.739 killing process with pid 68992 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68992' 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 68992 00:11:46.739 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 68992 00:11:48.124 00:11:48.124 real 0m2.722s 00:11:48.124 user 0m2.758s 00:11:48.124 sys 0m0.393s 00:11:48.124 02:59:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.124 02:59:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.124 ************************************ 00:11:48.124 END TEST xnvme_rpc 00:11:48.124 ************************************ 00:11:48.383 02:59:42 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:48.383 02:59:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:48.383 02:59:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.383 02:59:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:48.383 ************************************ 00:11:48.383 START TEST xnvme_bdevperf 00:11:48.383 ************************************ 00:11:48.383 02:59:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:11:48.383 02:59:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:11:48.383 02:59:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:11:48.383 02:59:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:48.384 02:59:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:11:48.384 02:59:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:11:48.384 02:59:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:48.384 02:59:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:48.384 { 00:11:48.384 "subsystems": [ 00:11:48.384 { 00:11:48.384 "subsystem": "bdev", 00:11:48.384 "config": [ 00:11:48.384 { 00:11:48.384 "params": { 00:11:48.384 "io_mechanism": "libaio", 00:11:48.384 "conserve_cpu": false, 00:11:48.384 "filename": "/dev/nvme0n1", 00:11:48.384 "name": "xnvme_bdev" 00:11:48.384 }, 00:11:48.384 "method": "bdev_xnvme_create" 00:11:48.384 }, 00:11:48.384 { 00:11:48.384 "method": "bdev_wait_for_examine" 00:11:48.384 } 00:11:48.384 ] 00:11:48.384 } 00:11:48.384 ] 00:11:48.384 } 00:11:48.384 [2024-12-10 02:59:42.646880] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:11:48.384 [2024-12-10 02:59:42.647251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69061 ] 00:11:48.644 [2024-12-10 02:59:42.812840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.644 [2024-12-10 02:59:42.946622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.905 Running I/O for 5 seconds... 00:11:51.246 28647.00 IOPS, 111.90 MiB/s [2024-12-10T02:59:46.629Z] 27985.50 IOPS, 109.32 MiB/s [2024-12-10T02:59:47.573Z] 28006.33 IOPS, 109.40 MiB/s [2024-12-10T02:59:48.517Z] 27701.50 IOPS, 108.21 MiB/s 00:11:54.129 Latency(us) 00:11:54.129 [2024-12-10T02:59:48.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.129 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:54.129 xnvme_bdev : 5.00 27860.69 108.83 0.00 0.00 2292.70 494.67 8116.38 00:11:54.129 [2024-12-10T02:59:48.517Z] =================================================================================================================== 00:11:54.129 [2024-12-10T02:59:48.517Z] Total : 27860.69 108.83 0.00 0.00 2292.70 494.67 8116.38 00:11:55.072 02:59:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:55.072 02:59:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:11:55.072 02:59:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:11:55.072 02:59:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:55.072 02:59:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:55.072 { 00:11:55.072 "subsystems": [ 00:11:55.072 { 00:11:55.072 "subsystem": "bdev", 00:11:55.073 "config": [ 00:11:55.073 { 00:11:55.073 "params": { 00:11:55.073 "io_mechanism": "libaio", 00:11:55.073 "conserve_cpu": false, 00:11:55.073 "filename": "/dev/nvme0n1", 00:11:55.073 "name": "xnvme_bdev" 00:11:55.073 }, 00:11:55.073 "method": "bdev_xnvme_create" 00:11:55.073 }, 00:11:55.073 { 00:11:55.073 "method": "bdev_wait_for_examine" 00:11:55.073 } 00:11:55.073 ] 00:11:55.073 } 00:11:55.073 ] 00:11:55.073 } 00:11:55.073 [2024-12-10 02:59:49.164666] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
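Each JSON block printed above is the config-file form of the same bdev_xnvme_create call that the xnvme_rpc test issued interactively earlier in the log. A hand-run equivalent of this libaio, conserve_cpu=false round, assuming a repo-relative scripts/rpc.py and the default socket from the trace:

  # Positional arguments: backing device, bdev name, io mechanism.
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_xnvme_create \
      /dev/nvme0n1 xnvme_bdev libaio
  # The conserve_cpu rounds later in the run append -c, per the
  # cc["true"]=-c mapping traced above. Teardown, as the test does
  # before killing the target:
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_xnvme_delete xnvme_bdev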
00:11:55.073 [2024-12-10 02:59:49.165029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69136 ] 00:11:55.073 [2024-12-10 02:59:49.330679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.334 [2024-12-10 02:59:49.464831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.595 Running I/O for 5 seconds... 00:11:57.480 35500.00 IOPS, 138.67 MiB/s [2024-12-10T02:59:52.814Z] 35294.50 IOPS, 137.87 MiB/s [2024-12-10T02:59:54.198Z] 35578.00 IOPS, 138.98 MiB/s [2024-12-10T02:59:55.142Z] 35409.50 IOPS, 138.32 MiB/s 00:12:00.754 Latency(us) 00:12:00.754 [2024-12-10T02:59:55.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.754 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:00.754 xnvme_bdev : 5.00 35155.23 137.33 0.00 0.00 1815.97 463.16 8822.15 00:12:00.754 [2024-12-10T02:59:55.142Z] =================================================================================================================== 00:12:00.754 [2024-12-10T02:59:55.142Z] Total : 35155.23 137.33 0.00 0.00 1815.97 463.16 8822.15 00:12:01.324 00:12:01.325 real 0m13.021s 00:12:01.325 user 0m6.166s 00:12:01.325 sys 0m5.536s 00:12:01.325 02:59:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.325 ************************************ 00:12:01.325 02:59:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:01.325 END TEST xnvme_bdevperf 00:12:01.325 ************************************ 00:12:01.325 02:59:55 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:01.325 02:59:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:01.325 02:59:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.325 02:59:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:01.325 ************************************ 00:12:01.325 START TEST xnvme_fio_plugin 00:12:01.325 ************************************ 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:01.325 02:59:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:01.325 { 00:12:01.325 "subsystems": [ 00:12:01.325 { 00:12:01.325 "subsystem": "bdev", 00:12:01.325 "config": [ 00:12:01.325 { 00:12:01.325 "params": { 00:12:01.325 "io_mechanism": "libaio", 00:12:01.325 "conserve_cpu": false, 00:12:01.325 "filename": "/dev/nvme0n1", 00:12:01.325 "name": "xnvme_bdev" 00:12:01.325 }, 00:12:01.325 "method": "bdev_xnvme_create" 00:12:01.325 }, 00:12:01.325 { 00:12:01.325 "method": "bdev_wait_for_examine" 00:12:01.325 } 00:12:01.325 ] 00:12:01.325 } 00:12:01.325 ] 00:12:01.325 } 00:12:01.586 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:01.586 fio-3.35 00:12:01.586 Starting 1 thread 00:12:08.186 00:12:08.186 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69255: Tue Dec 10 03:00:01 2024 00:12:08.186 read: IOPS=33.8k, BW=132MiB/s (138MB/s)(660MiB/5001msec) 00:12:08.186 slat (usec): min=4, max=2357, avg=16.97, stdev=85.12 00:12:08.186 clat (usec): min=102, max=5255, avg=1413.53, stdev=507.83 00:12:08.186 lat (usec): min=184, max=5259, avg=1430.50, stdev=499.94 00:12:08.186 clat percentiles (usec): 00:12:08.186 | 1.00th=[ 322], 5.00th=[ 619], 10.00th=[ 799], 20.00th=[ 1004], 00:12:08.186 | 30.00th=[ 1139], 40.00th=[ 1270], 50.00th=[ 1401], 60.00th=[ 1516], 00:12:08.186 | 70.00th=[ 1647], 80.00th=[ 1795], 90.00th=[ 2024], 95.00th=[ 2245], 00:12:08.186 | 99.00th=[ 2868], 99.50th=[ 3163], 99.90th=[ 3851], 99.95th=[ 4080], 00:12:08.186 | 99.99th=[ 4621] 00:12:08.186 bw ( KiB/s): min=123536, max=142361, per=100.00%, avg=135411.67, 
stdev=5830.21, samples=9 00:12:08.186 iops : min=30884, max=35590, avg=33852.89, stdev=1457.51, samples=9 00:12:08.186 lat (usec) : 250=0.41%, 500=2.57%, 750=5.18%, 1000=11.40% 00:12:08.186 lat (msec) : 2=69.81%, 4=10.56%, 10=0.06% 00:12:08.186 cpu : usr=55.96%, sys=36.66%, ctx=12, majf=0, minf=764 00:12:08.186 IO depths : 1=0.8%, 2=1.8%, 4=4.0%, 8=9.3%, 16=23.1%, 32=58.9%, >=64=2.0% 00:12:08.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.186 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:12:08.186 issued rwts: total=168841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.186 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:08.186 00:12:08.186 Run status group 0 (all jobs): 00:12:08.186 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=660MiB (692MB), run=5001-5001msec 00:12:08.449 ----------------------------------------------------- 00:12:08.449 Suppressions used: 00:12:08.449 count bytes template 00:12:08.449 1 11 /usr/src/fio/parse.c 00:12:08.449 1 8 libtcmalloc_minimal.so 00:12:08.449 1 904 libcrypto.so 00:12:08.449 ----------------------------------------------------- 00:12:08.449 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:08.449 03:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:08.449 { 00:12:08.449 "subsystems": [ 00:12:08.449 { 00:12:08.449 "subsystem": "bdev", 00:12:08.449 "config": [ 00:12:08.449 { 00:12:08.449 "params": { 00:12:08.449 "io_mechanism": "libaio", 00:12:08.449 "conserve_cpu": false, 00:12:08.449 "filename": "/dev/nvme0n1", 00:12:08.449 "name": "xnvme_bdev" 00:12:08.449 }, 00:12:08.449 "method": "bdev_xnvme_create" 00:12:08.449 }, 00:12:08.449 { 00:12:08.449 "method": "bdev_wait_for_examine" 00:12:08.449 } 00:12:08.449 ] 00:12:08.449 } 00:12:08.449 ] 00:12:08.449 } 00:12:08.449 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:08.449 fio-3.35 00:12:08.449 Starting 1 thread 00:12:15.042 00:12:15.042 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69347: Tue Dec 10 03:00:08 2024 00:12:15.042 write: IOPS=36.3k, BW=142MiB/s (149MB/s)(709MiB/5002msec); 0 zone resets 00:12:15.042 slat (usec): min=4, max=2659, avg=14.96, stdev=77.70 00:12:15.042 clat (usec): min=106, max=8322, avg=1343.77, stdev=477.97 00:12:15.042 lat (usec): min=213, max=8329, avg=1358.74, stdev=470.86 00:12:15.042 clat percentiles (usec): 00:12:15.042 | 1.00th=[ 343], 5.00th=[ 619], 10.00th=[ 783], 20.00th=[ 955], 00:12:15.042 | 30.00th=[ 1090], 40.00th=[ 1205], 50.00th=[ 1319], 60.00th=[ 1434], 00:12:15.042 | 70.00th=[ 1565], 80.00th=[ 1713], 90.00th=[ 1909], 95.00th=[ 2114], 00:12:15.042 | 99.00th=[ 2737], 99.50th=[ 2999], 99.90th=[ 3687], 99.95th=[ 4080], 00:12:15.042 | 99.99th=[ 5538] 00:12:15.042 bw ( KiB/s): min=134360, max=165304, per=99.64%, avg=144546.67, stdev=9108.25, samples=9 00:12:15.042 iops : min=33590, max=41326, avg=36136.67, stdev=2277.06, samples=9 00:12:15.042 lat (usec) : 250=0.38%, 500=2.35%, 750=6.12%, 1000=14.21% 00:12:15.042 lat (msec) : 2=69.66%, 4=7.23%, 10=0.05% 00:12:15.042 cpu : usr=59.47%, sys=32.95%, ctx=43, majf=0, minf=764 00:12:15.042 IO depths : 1=0.9%, 2=1.8%, 4=4.0%, 8=9.2%, 16=22.7%, 32=59.3%, >=64=2.1% 00:12:15.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.042 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:15.042 issued rwts: total=0,181406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.042 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:15.042 00:12:15.042 Run status group 0 (all jobs): 00:12:15.042 WRITE: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=709MiB (743MB), run=5002-5002msec 00:12:15.303 ----------------------------------------------------- 00:12:15.303 Suppressions used: 00:12:15.303 count bytes template 00:12:15.303 1 11 /usr/src/fio/parse.c 00:12:15.303 1 8 libtcmalloc_minimal.so 00:12:15.303 1 904 libcrypto.so 00:12:15.303 ----------------------------------------------------- 00:12:15.303 00:12:15.303 00:12:15.303 real 0m13.893s 00:12:15.303 user 0m8.622s 00:12:15.303 sys 0m4.118s 00:12:15.303 ************************************ 
00:12:15.303 END TEST xnvme_fio_plugin 00:12:15.303 ************************************ 00:12:15.303 03:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.303 03:00:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:15.303 03:00:09 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:15.303 03:00:09 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:15.303 03:00:09 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:15.303 03:00:09 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:15.303 03:00:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:15.304 03:00:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.304 03:00:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.304 ************************************ 00:12:15.304 START TEST xnvme_rpc 00:12:15.304 ************************************ 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:15.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69434 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69434 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69434 ']' 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.304 03:00:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.565 [2024-12-10 03:00:09.704486] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
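The rpc_xnvme assertions in this test all follow one pattern, visible verbatim in the trace: dump the live bdev subsystem config with framework_get_config and pull a single bdev_xnvme_create parameter out with jq. A standalone sketch of the conserve_cpu check for this round (swap in .params.name, .params.filename, or .params.io_mechanism for the other assertions):

  # Expect "true" here, since this round creates the bdev with -c.
  scripts/rpc.py -s /var/tmp/spdk.sock framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'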
00:12:15.565 [2024-12-10 03:00:09.704842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69434 ] 00:12:15.565 [2024-12-10 03:00:09.861003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.825 [2024-12-10 03:00:09.984618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.397 xnvme_bdev 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.397 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:16.659 03:00:10 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69434 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69434 ']' 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69434 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69434 00:12:16.659 killing process with pid 69434 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69434' 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69434 00:12:16.659 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69434 00:12:18.580 ************************************ 00:12:18.580 END TEST xnvme_rpc 00:12:18.580 ************************************ 00:12:18.580 00:12:18.580 real 0m2.938s 00:12:18.580 user 0m2.908s 00:12:18.580 sys 0m0.497s 00:12:18.580 03:00:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.580 03:00:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.580 03:00:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:18.580 03:00:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:18.580 03:00:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.580 03:00:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.580 ************************************ 00:12:18.580 START TEST xnvme_bdevperf 00:12:18.580 ************************************ 00:12:18.580 03:00:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:18.580 03:00:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:18.580 03:00:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:18.580 03:00:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:18.580 03:00:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:18.580 03:00:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:18.580 03:00:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:18.580 03:00:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:18.580 { 00:12:18.580 "subsystems": [ 00:12:18.580 { 00:12:18.580 "subsystem": "bdev", 00:12:18.580 "config": [ 00:12:18.580 { 00:12:18.580 "params": { 00:12:18.580 "io_mechanism": "libaio", 00:12:18.580 "conserve_cpu": true, 00:12:18.580 "filename": "/dev/nvme0n1", 00:12:18.580 "name": "xnvme_bdev" 00:12:18.580 }, 00:12:18.580 "method": "bdev_xnvme_create" 00:12:18.580 }, 00:12:18.580 { 00:12:18.580 "method": "bdev_wait_for_examine" 00:12:18.580 } 00:12:18.580 ] 00:12:18.580 } 00:12:18.580 ] 00:12:18.580 } 00:12:18.580 [2024-12-10 03:00:12.692455] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:12:18.580 [2024-12-10 03:00:12.692596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69508 ] 00:12:18.580 [2024-12-10 03:00:12.858055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.842 [2024-12-10 03:00:12.980207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.103 Running I/O for 5 seconds... 00:12:20.992 29621.00 IOPS, 115.71 MiB/s [2024-12-10T03:00:16.326Z] 29573.00 IOPS, 115.52 MiB/s [2024-12-10T03:00:17.719Z] 29490.67 IOPS, 115.20 MiB/s [2024-12-10T03:00:18.664Z] 29419.75 IOPS, 114.92 MiB/s [2024-12-10T03:00:18.664Z] 29931.20 IOPS, 116.92 MiB/s 00:12:24.276 Latency(us) 00:12:24.276 [2024-12-10T03:00:18.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.276 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:24.276 xnvme_bdev : 5.00 29911.04 116.84 0.00 0.00 2134.49 444.26 6755.25 00:12:24.276 [2024-12-10T03:00:18.664Z] =================================================================================================================== 00:12:24.276 [2024-12-10T03:00:18.664Z] Total : 29911.04 116.84 0.00 0.00 2134.49 444.26 6755.25 00:12:24.848 03:00:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:24.848 03:00:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:24.848 03:00:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:24.848 03:00:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:24.848 03:00:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:24.848 { 00:12:24.848 "subsystems": [ 00:12:24.848 { 00:12:24.848 "subsystem": "bdev", 00:12:24.848 "config": [ 00:12:24.848 { 00:12:24.848 "params": { 00:12:24.848 "io_mechanism": "libaio", 00:12:24.848 "conserve_cpu": true, 00:12:24.848 "filename": "/dev/nvme0n1", 00:12:24.848 "name": "xnvme_bdev" 00:12:24.848 }, 00:12:24.848 "method": "bdev_xnvme_create" 00:12:24.848 }, 00:12:24.848 { 00:12:24.848 "method": "bdev_wait_for_examine" 00:12:24.848 } 00:12:24.848 ] 00:12:24.848 } 00:12:24.848 ] 00:12:24.848 } 00:12:24.848 [2024-12-10 03:00:19.185241] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
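bdevperf receives the generated config on /dev/fd/62, i.e. gen_conf's output handed over by process substitution. A self-contained equivalent of this conserve_cpu=true randread run, with the JSON inlined instead of generated; the binary path and all flags are taken from the traced command line:

  # 5 s of 4 KiB random reads at queue depth 64 against the xnvme bdev.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev \
      --json <(printf '%s' '{"subsystems": [{"subsystem": "bdev", "config": [
        {"method": "bdev_xnvme_create", "params": {"io_mechanism": "libaio",
         "conserve_cpu": true, "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
        {"method": "bdev_wait_for_examine"}]}]}')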
00:12:24.848 [2024-12-10 03:00:19.185415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69583 ] 00:12:25.110 [2024-12-10 03:00:19.348999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.110 [2024-12-10 03:00:19.475318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.681 Running I/O for 5 seconds... 00:12:27.614 30872.00 IOPS, 120.59 MiB/s [2024-12-10T03:00:22.947Z] 31320.50 IOPS, 122.35 MiB/s [2024-12-10T03:00:23.891Z] 31278.00 IOPS, 122.18 MiB/s [2024-12-10T03:00:24.837Z] 31989.50 IOPS, 124.96 MiB/s 00:12:30.449 Latency(us) 00:12:30.449 [2024-12-10T03:00:24.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.449 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:30.449 xnvme_bdev : 5.00 32456.15 126.78 0.00 0.00 1967.07 428.50 7360.20 00:12:30.449 [2024-12-10T03:00:24.837Z] =================================================================================================================== 00:12:30.449 [2024-12-10T03:00:24.837Z] Total : 32456.15 126.78 0.00 0.00 1967.07 428.50 7360.20 00:12:31.392 ************************************ 00:12:31.393 END TEST xnvme_bdevperf 00:12:31.393 ************************************ 00:12:31.393 00:12:31.393 real 0m12.968s 00:12:31.393 user 0m5.652s 00:12:31.393 sys 0m6.061s 00:12:31.393 03:00:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.393 03:00:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:31.393 03:00:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:31.393 03:00:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:31.393 03:00:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.393 03:00:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.393 ************************************ 00:12:31.393 START TEST xnvme_fio_plugin 00:12:31.393 ************************************ 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:31.393 03:00:25 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:31.393 03:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:31.393 { 00:12:31.393 "subsystems": [ 00:12:31.393 { 00:12:31.393 "subsystem": "bdev", 00:12:31.393 "config": [ 00:12:31.393 { 00:12:31.393 "params": { 00:12:31.393 "io_mechanism": "libaio", 00:12:31.393 "conserve_cpu": true, 00:12:31.393 "filename": "/dev/nvme0n1", 00:12:31.393 "name": "xnvme_bdev" 00:12:31.393 }, 00:12:31.393 "method": "bdev_xnvme_create" 00:12:31.393 }, 00:12:31.393 { 00:12:31.393 "method": "bdev_wait_for_examine" 00:12:31.393 } 00:12:31.393 ] 00:12:31.393 } 00:12:31.393 ] 00:12:31.393 } 00:12:31.654 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:31.654 fio-3.35 00:12:31.654 Starting 1 thread 00:12:38.246 00:12:38.246 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69703: Tue Dec 10 03:00:31 2024 00:12:38.246 read: IOPS=33.7k, BW=132MiB/s (138MB/s)(659MiB/5001msec) 00:12:38.246 slat (usec): min=4, max=3564, avg=16.03, stdev=87.11 00:12:38.246 clat (usec): min=106, max=4668, avg=1437.66, stdev=485.15 00:12:38.246 lat (usec): min=211, max=4755, avg=1453.69, stdev=476.56 00:12:38.246 clat percentiles (usec): 00:12:38.246 | 1.00th=[ 351], 5.00th=[ 725], 10.00th=[ 865], 20.00th=[ 1045], 00:12:38.246 | 30.00th=[ 1172], 40.00th=[ 1287], 50.00th=[ 1418], 60.00th=[ 1532], 00:12:38.246 | 70.00th=[ 1663], 80.00th=[ 1811], 90.00th=[ 2008], 95.00th=[ 2212], 00:12:38.246 | 99.00th=[ 2900], 99.50th=[ 3228], 99.90th=[ 3949], 99.95th=[ 4178], 00:12:38.246 | 99.99th=[ 4424] 00:12:38.246 bw ( KiB/s): min=132200, max=140088, per=100.00%, avg=135548.78, stdev=2886.40, 
samples=9 00:12:38.246 iops : min=33050, max=35022, avg=33887.11, stdev=721.65, samples=9 00:12:38.246 lat (usec) : 250=0.34%, 500=1.59%, 750=3.71%, 1000=11.54% 00:12:38.246 lat (msec) : 2=72.59%, 4=10.14%, 10=0.09% 00:12:38.246 cpu : usr=59.04%, sys=34.38%, ctx=14, majf=0, minf=764 00:12:38.246 IO depths : 1=0.9%, 2=1.9%, 4=4.1%, 8=9.2%, 16=22.7%, 32=59.3%, >=64=2.0% 00:12:38.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.246 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:38.246 issued rwts: total=168748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.246 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:38.246 00:12:38.246 Run status group 0 (all jobs): 00:12:38.246 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=659MiB (691MB), run=5001-5001msec 00:12:38.246 ----------------------------------------------------- 00:12:38.246 Suppressions used: 00:12:38.246 count bytes template 00:12:38.246 1 11 /usr/src/fio/parse.c 00:12:38.246 1 8 libtcmalloc_minimal.so 00:12:38.246 1 904 libcrypto.so 00:12:38.246 ----------------------------------------------------- 00:12:38.246 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:38.246 03:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:38.508 { 00:12:38.508 "subsystems": [ 00:12:38.508 { 00:12:38.508 "subsystem": "bdev", 00:12:38.508 "config": [ 00:12:38.508 { 00:12:38.508 "params": { 00:12:38.508 "io_mechanism": "libaio", 00:12:38.508 "conserve_cpu": true, 00:12:38.508 "filename": "/dev/nvme0n1", 00:12:38.508 "name": "xnvme_bdev" 00:12:38.508 }, 00:12:38.508 "method": "bdev_xnvme_create" 00:12:38.508 }, 00:12:38.508 { 00:12:38.508 "method": "bdev_wait_for_examine" 00:12:38.508 } 00:12:38.508 ] 00:12:38.508 } 00:12:38.508 ] 00:12:38.508 } 00:12:38.508 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:38.508 fio-3.35 00:12:38.508 Starting 1 thread 00:12:45.096 00:12:45.096 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69794: Tue Dec 10 03:00:38 2024 00:12:45.096 write: IOPS=34.7k, BW=135MiB/s (142MB/s)(677MiB/5001msec); 0 zone resets 00:12:45.096 slat (usec): min=4, max=2346, avg=16.57, stdev=83.90 00:12:45.096 clat (usec): min=106, max=5821, avg=1380.21, stdev=489.53 00:12:45.096 lat (usec): min=215, max=5826, avg=1396.78, stdev=482.07 00:12:45.096 clat percentiles (usec): 00:12:45.096 | 1.00th=[ 330], 5.00th=[ 635], 10.00th=[ 799], 20.00th=[ 979], 00:12:45.096 | 30.00th=[ 1123], 40.00th=[ 1237], 50.00th=[ 1369], 60.00th=[ 1483], 00:12:45.096 | 70.00th=[ 1598], 80.00th=[ 1745], 90.00th=[ 1958], 95.00th=[ 2147], 00:12:45.096 | 99.00th=[ 2835], 99.50th=[ 3228], 99.90th=[ 3818], 99.95th=[ 3982], 00:12:45.096 | 99.99th=[ 4555] 00:12:45.096 bw ( KiB/s): min=131320, max=145984, per=99.92%, avg=138531.56, stdev=4376.34, samples=9 00:12:45.096 iops : min=32830, max=36496, avg=34632.89, stdev=1094.08, samples=9 00:12:45.096 lat (usec) : 250=0.37%, 500=2.28%, 750=5.35%, 1000=13.67% 00:12:45.096 lat (msec) : 2=69.82%, 4=8.47%, 10=0.05% 00:12:45.096 cpu : usr=56.72%, sys=36.24%, ctx=13, majf=0, minf=764 00:12:45.096 IO depths : 1=0.8%, 2=1.8%, 4=3.9%, 8=9.2%, 16=22.9%, 32=59.4%, >=64=2.1% 00:12:45.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.096 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:45.096 issued rwts: total=0,173336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.096 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:45.096 00:12:45.096 Run status group 0 (all jobs): 00:12:45.096 WRITE: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=677MiB (710MB), run=5001-5001msec 00:12:45.358 ----------------------------------------------------- 00:12:45.358 Suppressions used: 00:12:45.358 count bytes template 00:12:45.358 1 11 /usr/src/fio/parse.c 00:12:45.358 1 8 libtcmalloc_minimal.so 00:12:45.358 1 904 libcrypto.so 00:12:45.358 ----------------------------------------------------- 00:12:45.358 00:12:45.358 ************************************ 00:12:45.358 END TEST xnvme_fio_plugin 00:12:45.358 ************************************ 00:12:45.358 
00:12:45.358 real 0m13.871s 00:12:45.358 user 0m8.641s 00:12:45.358 sys 0m4.155s 00:12:45.358 03:00:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.358 03:00:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:45.358 03:00:39 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:45.358 03:00:39 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:45.358 03:00:39 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:45.358 03:00:39 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:45.358 03:00:39 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:45.358 03:00:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:45.358 03:00:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:45.358 03:00:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:45.358 03:00:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:45.358 03:00:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:45.358 03:00:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.358 03:00:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.358 ************************************ 00:12:45.358 START TEST xnvme_rpc 00:12:45.358 ************************************ 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:45.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69875 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69875 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69875 ']' 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:45.358 03:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.358 [2024-12-10 03:00:39.678390] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
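The fio_plugin passes above run stock fio with the SPDK bdev engine, preloading libasan ahead of the plugin exactly as the traced LD_PRELOAD assignment shows. A condensed sketch of one such randread run; bdev.json here is a hypothetical stand-in for the /dev/fd/62 config stream the suite actually uses, holding the same JSON subsystem block printed in the log:

  # The trace puts libasan first in LD_PRELOAD, then the SPDK fio plugin;
  # the fio flags below match the traced command line.
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev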
00:12:45.358 [2024-12-10 03:00:39.678539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69875 ] 00:12:45.619 [2024-12-10 03:00:39.835165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.619 [2024-12-10 03:00:39.961237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.562 xnvme_bdev 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:46.562 03:00:40 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69875 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69875 ']' 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69875 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69875 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.562 killing process with pid 69875 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69875' 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69875 00:12:46.562 03:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69875 00:12:48.475 ************************************ 00:12:48.475 END TEST xnvme_rpc 00:12:48.475 ************************************ 00:12:48.475 00:12:48.475 real 0m2.922s 00:12:48.475 user 0m3.000s 00:12:48.475 sys 0m0.481s 00:12:48.476 03:00:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.476 03:00:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.476 03:00:42 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:48.476 03:00:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:48.476 03:00:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.476 03:00:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.476 ************************************ 00:12:48.476 START TEST xnvme_bdevperf 00:12:48.476 ************************************ 00:12:48.476 03:00:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:48.476 03:00:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:48.476 03:00:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:12:48.476 03:00:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:48.476 03:00:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:48.476 03:00:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:48.476 03:00:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:48.476 03:00:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:48.476 { 00:12:48.476 "subsystems": [ 00:12:48.476 { 00:12:48.476 "subsystem": "bdev", 00:12:48.476 "config": [ 00:12:48.476 { 00:12:48.476 "params": { 00:12:48.476 "io_mechanism": "io_uring", 00:12:48.476 "conserve_cpu": false, 00:12:48.476 "filename": "/dev/nvme0n1", 00:12:48.476 "name": "xnvme_bdev" 00:12:48.476 }, 00:12:48.476 "method": "bdev_xnvme_create" 00:12:48.476 }, 00:12:48.476 { 00:12:48.476 "method": "bdev_wait_for_examine" 00:12:48.476 } 00:12:48.476 ] 00:12:48.476 } 00:12:48.476 ] 00:12:48.476 } 00:12:48.476 [2024-12-10 03:00:42.653025] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:12:48.476 [2024-12-10 03:00:42.653353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69949 ] 00:12:48.476 [2024-12-10 03:00:42.817561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.736 [2024-12-10 03:00:42.942153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.998 Running I/O for 5 seconds... 00:12:50.884 30178.00 IOPS, 117.88 MiB/s [2024-12-10T03:00:46.681Z] 30415.00 IOPS, 118.81 MiB/s [2024-12-10T03:00:47.273Z] 30598.33 IOPS, 119.52 MiB/s [2024-12-10T03:00:48.661Z] 30611.75 IOPS, 119.58 MiB/s 00:12:54.274 Latency(us) 00:12:54.274 [2024-12-10T03:00:48.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.274 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:54.274 xnvme_bdev : 5.00 30434.51 118.88 0.00 0.00 2098.81 1235.10 8519.68 00:12:54.274 [2024-12-10T03:00:48.662Z] =================================================================================================================== 00:12:54.274 [2024-12-10T03:00:48.662Z] Total : 30434.51 118.88 0.00 0.00 2098.81 1235.10 8519.68 00:12:54.847 03:00:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:54.847 03:00:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:54.847 03:00:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:54.847 03:00:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:54.847 03:00:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:54.847 { 00:12:54.847 "subsystems": [ 00:12:54.847 { 00:12:54.847 "subsystem": "bdev", 00:12:54.847 "config": [ 00:12:54.847 { 00:12:54.847 "params": { 00:12:54.847 "io_mechanism": "io_uring", 00:12:54.847 "conserve_cpu": false, 00:12:54.847 "filename": "/dev/nvme0n1", 00:12:54.847 "name": "xnvme_bdev" 00:12:54.847 }, 00:12:54.847 "method": "bdev_xnvme_create" 00:12:54.847 }, 00:12:54.847 { 00:12:54.847 "method": "bdev_wait_for_examine" 00:12:54.847 } 00:12:54.847 ] 00:12:54.847 } 00:12:54.847 ] 00:12:54.847 } 00:12:54.847 [2024-12-10 03:00:49.107992] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
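Each bdevperf pass above receives its entire configuration as a JSON block on /dev/fd/62: one bdev_xnvme_create call plus bdev_wait_for_examine. A standalone equivalent of the harness invocation, as a sketch using bash process substitution and the paths recorded in this job:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# 64-deep 4 KiB random reads for 5 s against the xnvme bdev defined inline
$BDEVPERF --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096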
00:12:54.847 [2024-12-10 03:00:49.108331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70030 ] 00:12:55.108 [2024-12-10 03:00:49.274437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.108 [2024-12-10 03:00:49.399281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.370 Running I/O for 5 seconds... 00:12:57.701 32065.00 IOPS, 125.25 MiB/s [2024-12-10T03:00:53.034Z] 31634.00 IOPS, 123.57 MiB/s [2024-12-10T03:00:53.979Z] 31681.00 IOPS, 123.75 MiB/s [2024-12-10T03:00:54.919Z] 31612.00 IOPS, 123.48 MiB/s 00:13:00.531 Latency(us) 00:13:00.531 [2024-12-10T03:00:54.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.531 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:00.531 xnvme_bdev : 5.00 31259.62 122.11 0.00 0.00 2043.34 381.24 5646.18 00:13:00.531 [2024-12-10T03:00:54.919Z] =================================================================================================================== 00:13:00.531 [2024-12-10T03:00:54.919Z] Total : 31259.62 122.11 0.00 0.00 2043.34 381.24 5646.18 00:13:01.099 00:13:01.099 real 0m12.895s 00:13:01.099 user 0m6.217s 00:13:01.099 sys 0m6.401s 00:13:01.099 ************************************ 00:13:01.099 END TEST xnvme_bdevperf 00:13:01.099 ************************************ 00:13:01.099 03:00:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.099 03:00:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:01.360 03:00:55 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:01.360 03:00:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:01.360 03:00:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.360 03:00:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.360 ************************************ 00:13:01.360 START TEST xnvme_fio_plugin 00:13:01.360 ************************************ 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:01.360 03:00:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:01.360 { 00:13:01.360 "subsystems": [ 00:13:01.360 { 00:13:01.360 "subsystem": "bdev", 00:13:01.360 "config": [ 00:13:01.360 { 00:13:01.360 "params": { 00:13:01.360 "io_mechanism": "io_uring", 00:13:01.360 "conserve_cpu": false, 00:13:01.360 "filename": "/dev/nvme0n1", 00:13:01.360 "name": "xnvme_bdev" 00:13:01.360 }, 00:13:01.360 "method": "bdev_xnvme_create" 00:13:01.360 }, 00:13:01.360 { 00:13:01.360 "method": "bdev_wait_for_examine" 00:13:01.360 } 00:13:01.360 ] 00:13:01.360 } 00:13:01.360 ] 00:13:01.360 } 00:13:01.622 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:01.622 fio-3.35 00:13:01.622 Starting 1 thread 00:13:08.211 00:13:08.211 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70148: Tue Dec 10 03:01:01 2024 00:13:08.211 read: IOPS=29.5k, BW=115MiB/s (121MB/s)(575MiB/5001msec) 00:13:08.211 slat (usec): min=2, max=191, avg= 3.62, stdev= 2.48 00:13:08.211 clat (usec): min=1072, max=6629, avg=2024.42, stdev=348.40 00:13:08.211 lat (usec): min=1074, max=6638, avg=2028.04, stdev=348.73 00:13:08.211 clat percentiles (usec): 00:13:08.211 | 1.00th=[ 1401], 5.00th=[ 1532], 10.00th=[ 1614], 20.00th=[ 1745], 00:13:08.211 | 30.00th=[ 1827], 40.00th=[ 1909], 50.00th=[ 1991], 60.00th=[ 2073], 00:13:08.211 | 70.00th=[ 2180], 80.00th=[ 2278], 90.00th=[ 2442], 95.00th=[ 2606], 00:13:08.211 | 99.00th=[ 2999], 99.50th=[ 3195], 99.90th=[ 3884], 99.95th=[ 4178], 00:13:08.211 | 99.99th=[ 6587] 00:13:08.211 bw ( KiB/s): min=115200, max=121856, per=100.00%, avg=118017.22, stdev=2003.53, 
samples=9 00:13:08.211 iops : min=28800, max=30464, avg=29504.22, stdev=500.88, samples=9 00:13:08.211 lat (msec) : 2=50.65%, 4=49.27%, 10=0.08% 00:13:08.211 cpu : usr=30.70%, sys=67.58%, ctx=11, majf=0, minf=762 00:13:08.211 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:08.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.211 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:08.211 issued rwts: total=147327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:08.211 00:13:08.211 Run status group 0 (all jobs): 00:13:08.211 READ: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=575MiB (603MB), run=5001-5001msec 00:13:08.211 ----------------------------------------------------- 00:13:08.211 Suppressions used: 00:13:08.211 count bytes template 00:13:08.211 1 11 /usr/src/fio/parse.c 00:13:08.211 1 8 libtcmalloc_minimal.so 00:13:08.211 1 904 libcrypto.so 00:13:08.211 ----------------------------------------------------- 00:13:08.211 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:08.211 03:01:02 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:08.211 03:01:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:08.211 { 00:13:08.211 "subsystems": [ 00:13:08.211 { 00:13:08.211 "subsystem": "bdev", 00:13:08.211 "config": [ 00:13:08.211 { 00:13:08.211 "params": { 00:13:08.211 "io_mechanism": "io_uring", 00:13:08.211 "conserve_cpu": false, 00:13:08.211 "filename": "/dev/nvme0n1", 00:13:08.211 "name": "xnvme_bdev" 00:13:08.211 }, 00:13:08.211 "method": "bdev_xnvme_create" 00:13:08.211 }, 00:13:08.211 { 00:13:08.211 "method": "bdev_wait_for_examine" 00:13:08.211 } 00:13:08.211 ] 00:13:08.211 } 00:13:08.211 ] 00:13:08.211 } 00:13:08.473 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:08.473 fio-3.35 00:13:08.473 Starting 1 thread 00:13:15.070 00:13:15.070 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70235: Tue Dec 10 03:01:08 2024 00:13:15.070 write: IOPS=30.9k, BW=121MiB/s (126MB/s)(603MiB/5001msec); 0 zone resets 00:13:15.070 slat (nsec): min=2906, max=86301, avg=3598.16, stdev=1895.86 00:13:15.070 clat (usec): min=1033, max=5133, avg=1926.60, stdev=328.99 00:13:15.070 lat (usec): min=1036, max=5136, avg=1930.20, stdev=329.26 00:13:15.070 clat percentiles (usec): 00:13:15.070 | 1.00th=[ 1287], 5.00th=[ 1450], 10.00th=[ 1532], 20.00th=[ 1647], 00:13:15.070 | 30.00th=[ 1745], 40.00th=[ 1827], 50.00th=[ 1909], 60.00th=[ 1975], 00:13:15.070 | 70.00th=[ 2073], 80.00th=[ 2180], 90.00th=[ 2343], 95.00th=[ 2474], 00:13:15.070 | 99.00th=[ 2835], 99.50th=[ 3064], 99.90th=[ 3687], 99.95th=[ 3949], 00:13:15.070 | 99.99th=[ 4146] 00:13:15.070 bw ( KiB/s): min=117600, max=131416, per=100.00%, avg=123730.00, stdev=4799.87, samples=9 00:13:15.070 iops : min=29400, max=32854, avg=30932.44, stdev=1199.98, samples=9 00:13:15.070 lat (msec) : 2=62.09%, 4=37.87%, 10=0.04% 00:13:15.070 cpu : usr=31.52%, sys=67.34%, ctx=12, majf=0, minf=762 00:13:15.070 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:13:15.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.070 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:15.070 issued rwts: total=0,154448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.070 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:15.070 00:13:15.070 Run status group 0 (all jobs): 00:13:15.070 WRITE: bw=121MiB/s (126MB/s), 121MiB/s-121MiB/s (126MB/s-126MB/s), io=603MiB (633MB), run=5001-5001msec 00:13:15.070 ----------------------------------------------------- 00:13:15.070 Suppressions used: 00:13:15.070 count bytes template 00:13:15.070 1 11 /usr/src/fio/parse.c 00:13:15.070 1 8 libtcmalloc_minimal.so 00:13:15.070 1 904 libcrypto.so 00:13:15.070 ----------------------------------------------------- 00:13:15.070 00:13:15.070 00:13:15.070 real 0m13.806s 00:13:15.070 user 0m6.081s 00:13:15.070 sys 0m7.269s 00:13:15.070 03:01:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.070 03:01:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
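Both fio passes above drive the bdev through fio's external-engine mechanism rather than a device node. Because this build is ASAN-instrumented (SPDK_RUN_ASAN=1), the harness preloads libasan ahead of the SPDK fio plugin before launching fio. A standalone sketch of the randread invocation, with the plugin and fio paths from this job and the JSON config delivered on /dev/fd/62 as in the bdevperf sketch:

# --filename names the bdev created by the JSON config, not a /dev path
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev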
00:13:15.070 ************************************ 00:13:15.070 END TEST xnvme_fio_plugin 00:13:15.070 ************************************ 00:13:15.070 03:01:09 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:15.070 03:01:09 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:15.070 03:01:09 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:15.070 03:01:09 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:15.070 03:01:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:15.070 03:01:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.070 03:01:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.070 ************************************ 00:13:15.070 START TEST xnvme_rpc 00:13:15.070 ************************************ 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70316 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70316 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70316 ']' 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.070 03:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:15.331 [2024-12-10 03:01:09.507371] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
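This second xnvme_rpc pass differs from the first only in the conserve_cpu setting: the harness maps cc["true"] to -c and appends it to the create call, and the verification step now expects true. A sketch of the delta, under the same rpc.py assumptions as the earlier sketch:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# conserve_cpu on: identical create call plus the -c flag
$RPC bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c

$RPC framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # true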
00:13:15.331 [2024-12-10 03:01:09.507538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70316 ] 00:13:15.331 [2024-12-10 03:01:09.671448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.592 [2024-12-10 03:01:09.804676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.165 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.165 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:16.165 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:16.165 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.165 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.165 xnvme_bdev 00:13:16.165 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.165 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:16.165 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:16.165 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.165 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:16.165 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70316 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70316 ']' 00:13:16.425 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70316 00:13:16.426 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:16.426 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.426 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70316 00:13:16.426 killing process with pid 70316 00:13:16.426 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.426 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.426 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70316' 00:13:16.426 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70316 00:13:16.426 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70316 00:13:18.343 00:13:18.343 real 0m2.961s 00:13:18.343 user 0m2.958s 00:13:18.343 sys 0m0.480s 00:13:18.343 03:01:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.343 03:01:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.343 ************************************ 00:13:18.343 END TEST xnvme_rpc 00:13:18.343 ************************************ 00:13:18.343 03:01:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:18.343 03:01:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:18.343 03:01:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.343 03:01:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.343 ************************************ 00:13:18.343 START TEST xnvme_bdevperf 00:13:18.343 ************************************ 00:13:18.343 03:01:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:18.343 03:01:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:18.343 03:01:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:18.343 03:01:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:18.343 03:01:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:18.343 03:01:12 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:18.343 03:01:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:18.343 03:01:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:18.343 { 00:13:18.343 "subsystems": [ 00:13:18.343 { 00:13:18.343 "subsystem": "bdev", 00:13:18.343 "config": [ 00:13:18.343 { 00:13:18.343 "params": { 00:13:18.343 "io_mechanism": "io_uring", 00:13:18.343 "conserve_cpu": true, 00:13:18.343 "filename": "/dev/nvme0n1", 00:13:18.343 "name": "xnvme_bdev" 00:13:18.343 }, 00:13:18.343 "method": "bdev_xnvme_create" 00:13:18.343 }, 00:13:18.343 { 00:13:18.343 "method": "bdev_wait_for_examine" 00:13:18.343 } 00:13:18.343 ] 00:13:18.343 } 00:13:18.343 ] 00:13:18.343 } 00:13:18.343 [2024-12-10 03:01:12.515764] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:13:18.343 [2024-12-10 03:01:12.516080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70390 ] 00:13:18.343 [2024-12-10 03:01:12.677976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.604 [2024-12-10 03:01:12.806951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.866 Running I/O for 5 seconds... 00:13:20.750 29993.00 IOPS, 117.16 MiB/s [2024-12-10T03:01:16.525Z] 30688.50 IOPS, 119.88 MiB/s [2024-12-10T03:01:17.470Z] 30864.00 IOPS, 120.56 MiB/s [2024-12-10T03:01:18.412Z] 30983.50 IOPS, 121.03 MiB/s 00:13:24.024 Latency(us) 00:13:24.024 [2024-12-10T03:01:18.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.024 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:24.024 xnvme_bdev : 5.00 31748.88 124.02 0.00 0.00 2011.80 863.31 6200.71 00:13:24.024 [2024-12-10T03:01:18.412Z] =================================================================================================================== 00:13:24.024 [2024-12-10T03:01:18.412Z] Total : 31748.88 124.02 0.00 0.00 2011.80 863.31 6200.71 00:13:24.597 03:01:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:24.597 03:01:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:24.597 03:01:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:24.597 03:01:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:24.597 03:01:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:24.597 { 00:13:24.597 "subsystems": [ 00:13:24.597 { 00:13:24.597 "subsystem": "bdev", 00:13:24.597 "config": [ 00:13:24.597 { 00:13:24.597 "params": { 00:13:24.597 "io_mechanism": "io_uring", 00:13:24.597 "conserve_cpu": true, 00:13:24.597 "filename": "/dev/nvme0n1", 00:13:24.597 "name": "xnvme_bdev" 00:13:24.597 }, 00:13:24.597 "method": "bdev_xnvme_create" 00:13:24.597 }, 00:13:24.597 { 00:13:24.597 "method": "bdev_wait_for_examine" 00:13:24.597 } 00:13:24.597 ] 00:13:24.597 } 00:13:24.597 ] 00:13:24.597 } 00:13:24.858 [2024-12-10 03:01:18.984849] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:13:24.858 [2024-12-10 03:01:18.984989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70471 ] 00:13:24.858 [2024-12-10 03:01:19.141629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.119 [2024-12-10 03:01:19.271926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.380 Running I/O for 5 seconds... 00:13:27.262 32686.00 IOPS, 127.68 MiB/s [2024-12-10T03:01:22.594Z] 32009.00 IOPS, 125.04 MiB/s [2024-12-10T03:01:23.978Z] 31812.00 IOPS, 124.27 MiB/s [2024-12-10T03:01:24.920Z] 31885.75 IOPS, 124.55 MiB/s 00:13:30.532 Latency(us) 00:13:30.532 [2024-12-10T03:01:24.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.532 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:30.532 xnvme_bdev : 5.00 31663.69 123.69 0.00 0.00 2017.12 513.58 8217.21 00:13:30.532 [2024-12-10T03:01:24.920Z] =================================================================================================================== 00:13:30.532 [2024-12-10T03:01:24.920Z] Total : 31663.69 123.69 0.00 0.00 2017.12 513.58 8217.21 00:13:31.104 ************************************ 00:13:31.104 END TEST xnvme_bdevperf 00:13:31.104 ************************************ 00:13:31.104 00:13:31.104 real 0m12.934s 00:13:31.104 user 0m9.136s 00:13:31.104 sys 0m3.263s 00:13:31.104 03:01:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.104 03:01:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:31.104 03:01:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:31.104 03:01:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:31.104 03:01:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.104 03:01:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.104 ************************************ 00:13:31.104 START TEST xnvme_fio_plugin 00:13:31.104 ************************************ 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:31.104 03:01:25 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:31.104 03:01:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:31.104 { 00:13:31.104 "subsystems": [ 00:13:31.104 { 00:13:31.104 "subsystem": "bdev", 00:13:31.104 "config": [ 00:13:31.104 { 00:13:31.104 "params": { 00:13:31.104 "io_mechanism": "io_uring", 00:13:31.104 "conserve_cpu": true, 00:13:31.104 "filename": "/dev/nvme0n1", 00:13:31.104 "name": "xnvme_bdev" 00:13:31.104 }, 00:13:31.104 "method": "bdev_xnvme_create" 00:13:31.104 }, 00:13:31.104 { 00:13:31.104 "method": "bdev_wait_for_examine" 00:13:31.104 } 00:13:31.104 ] 00:13:31.104 } 00:13:31.104 ] 00:13:31.104 } 00:13:31.363 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:31.363 fio-3.35 00:13:31.363 Starting 1 thread 00:13:37.950 00:13:37.950 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70590: Tue Dec 10 03:01:31 2024 00:13:37.950 read: IOPS=30.3k, BW=118MiB/s (124MB/s)(592MiB/5001msec) 00:13:37.950 slat (nsec): min=2857, max=99592, avg=3555.16, stdev=1880.57 00:13:37.950 clat (usec): min=1196, max=5689, avg=1966.65, stdev=312.83 00:13:37.950 lat (usec): min=1202, max=5692, avg=1970.20, stdev=313.18 00:13:37.950 clat percentiles (usec): 00:13:37.950 | 1.00th=[ 1385], 5.00th=[ 1516], 10.00th=[ 1598], 20.00th=[ 1696], 00:13:37.950 | 30.00th=[ 1778], 40.00th=[ 1860], 50.00th=[ 1926], 60.00th=[ 2008], 00:13:37.950 | 70.00th=[ 2114], 80.00th=[ 2212], 90.00th=[ 2376], 95.00th=[ 2540], 00:13:37.950 | 99.00th=[ 2835], 99.50th=[ 2999], 99.90th=[ 3425], 99.95th=[ 3523], 00:13:37.950 | 99.99th=[ 3752] 00:13:37.950 bw ( KiB/s): min=118784, max=122880, per=99.92%, avg=121172.44, 
stdev=1425.59, samples=9 00:13:37.950 iops : min=29696, max=30720, avg=30293.11, stdev=356.40, samples=9 00:13:37.950 lat (msec) : 2=58.51%, 4=41.49%, 10=0.01% 00:13:37.950 cpu : usr=65.78%, sys=30.66%, ctx=14, majf=0, minf=762 00:13:37.950 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:37.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.950 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:37.950 issued rwts: total=151614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:37.950 00:13:37.950 Run status group 0 (all jobs): 00:13:37.950 READ: bw=118MiB/s (124MB/s), 118MiB/s-118MiB/s (124MB/s-124MB/s), io=592MiB (621MB), run=5001-5001msec 00:13:37.950 ----------------------------------------------------- 00:13:37.950 Suppressions used: 00:13:37.950 count bytes template 00:13:37.950 1 11 /usr/src/fio/parse.c 00:13:37.950 1 8 libtcmalloc_minimal.so 00:13:37.950 1 904 libcrypto.so 00:13:37.950 ----------------------------------------------------- 00:13:37.950 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:38.212 
03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:38.212 03:01:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.212 { 00:13:38.212 "subsystems": [ 00:13:38.212 { 00:13:38.212 "subsystem": "bdev", 00:13:38.212 "config": [ 00:13:38.212 { 00:13:38.212 "params": { 00:13:38.212 "io_mechanism": "io_uring", 00:13:38.212 "conserve_cpu": true, 00:13:38.212 "filename": "/dev/nvme0n1", 00:13:38.212 "name": "xnvme_bdev" 00:13:38.212 }, 00:13:38.212 "method": "bdev_xnvme_create" 00:13:38.212 }, 00:13:38.212 { 00:13:38.212 "method": "bdev_wait_for_examine" 00:13:38.212 } 00:13:38.212 ] 00:13:38.212 } 00:13:38.212 ] 00:13:38.212 } 00:13:38.212 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:38.212 fio-3.35 00:13:38.212 Starting 1 thread 00:13:44.803 00:13:44.803 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70676: Tue Dec 10 03:01:38 2024 00:13:44.803 write: IOPS=43.7k, BW=171MiB/s (179MB/s)(854MiB/5002msec); 0 zone resets 00:13:44.803 slat (nsec): min=2913, max=61718, avg=3661.02, stdev=1492.20 00:13:44.803 clat (usec): min=600, max=6394, avg=1321.61, stdev=540.03 00:13:44.803 lat (usec): min=604, max=6408, avg=1325.27, stdev=540.06 00:13:44.803 clat percentiles (usec): 00:13:44.803 | 1.00th=[ 685], 5.00th=[ 725], 10.00th=[ 758], 20.00th=[ 824], 00:13:44.803 | 30.00th=[ 889], 40.00th=[ 971], 50.00th=[ 1123], 60.00th=[ 1418], 00:13:44.803 | 70.00th=[ 1680], 80.00th=[ 1876], 90.00th=[ 2089], 95.00th=[ 2245], 00:13:44.803 | 99.00th=[ 2540], 99.50th=[ 2737], 99.90th=[ 3294], 99.95th=[ 3523], 00:13:44.803 | 99.99th=[ 6325] 00:13:44.803 bw ( KiB/s): min=122808, max=250368, per=99.60%, avg=174065.56, stdev=59881.45, samples=9 00:13:44.803 iops : min=30702, max=62592, avg=43516.33, stdev=14970.41, samples=9 00:13:44.803 lat (usec) : 750=8.93%, 1000=33.23% 00:13:44.803 lat (msec) : 2=43.93%, 4=13.88%, 10=0.03% 00:13:44.803 cpu : usr=55.53%, sys=41.07%, ctx=9, majf=0, minf=762 00:13:44.803 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:44.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.803 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:44.803 issued rwts: total=0,218548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.803 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:44.803 00:13:44.803 Run status group 0 (all jobs): 00:13:44.803 WRITE: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=854MiB (895MB), run=5002-5002msec 00:13:45.065 ----------------------------------------------------- 00:13:45.065 Suppressions used: 00:13:45.065 count bytes template 00:13:45.065 1 11 /usr/src/fio/parse.c 00:13:45.065 1 8 libtcmalloc_minimal.so 00:13:45.065 1 904 libcrypto.so 00:13:45.065 ----------------------------------------------------- 00:13:45.065 00:13:45.065 ************************************ 00:13:45.065 END TEST xnvme_fio_plugin 00:13:45.065 ************************************ 00:13:45.065 00:13:45.065 real 0m13.787s 00:13:45.065 user 0m8.925s 00:13:45.065 sys 0m4.182s 00:13:45.065 
03:01:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.065 03:01:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:45.065 03:01:39 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:45.065 03:01:39 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:45.065 03:01:39 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:45.065 03:01:39 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:45.065 03:01:39 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:45.065 03:01:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:45.065 03:01:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:45.065 03:01:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:45.065 03:01:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:45.065 03:01:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:45.065 03:01:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.065 03:01:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.065 ************************************ 00:13:45.065 START TEST xnvme_rpc 00:13:45.065 ************************************ 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70762 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70762 00:13:45.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70762 ']' 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.065 03:01:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.065 [2024-12-10 03:01:39.383599] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
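At this point the outer driver loop moves on from io_uring to io_uring_cmd, and the target changes with it: io_uring_cmd issues NVMe passthrough commands, so the tests point at the NVMe generic character device /dev/ng0n1 instead of the block device /dev/nvme0n1. A reconstructed sketch of the loop's shape, inferred from the xnvme.sh xtrace markers (@75-@88) visible in this log; the array contents beyond the values actually seen here are an assumption:

# every io mechanism is run through the same three tests, once per conserve_cpu value
for io in "${xnvme_io[@]}"; do                          # e.g. io_uring, io_uring_cmd
    method_bdev_xnvme_create_0["io_mechanism"]=$io
    method_bdev_xnvme_create_0["filename"]=$filename    # /dev/nvme0n1 or /dev/ng0n1
    for cc in "${xnvme_conserve_cpu[@]}"; do            # false, true
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc
        run_test xnvme_rpc xnvme_rpc
        run_test xnvme_bdevperf xnvme_bdevperf
        run_test xnvme_fio_plugin xnvme_fio_plugin
    done
done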
00:13:45.065 [2024-12-10 03:01:39.383764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70762 ] 00:13:45.326 [2024-12-10 03:01:39.544959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.326 [2024-12-10 03:01:39.663670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.272 xnvme_bdev 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70762 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70762 ']' 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70762 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70762 00:13:46.272 killing process with pid 70762 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70762' 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70762 00:13:46.272 03:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70762 00:13:48.188 ************************************ 00:13:48.188 END TEST xnvme_rpc 00:13:48.188 ************************************ 00:13:48.188 00:13:48.188 real 0m2.886s 00:13:48.188 user 0m2.891s 00:13:48.188 sys 0m0.478s 00:13:48.188 03:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.188 03:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.188 03:01:42 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:48.188 03:01:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:48.188 03:01:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.188 03:01:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:48.188 ************************************ 00:13:48.188 START TEST xnvme_bdevperf 00:13:48.188 ************************************ 00:13:48.188 03:01:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:48.188 03:01:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:48.188 03:01:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:13:48.188 03:01:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:48.188 03:01:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:48.188 03:01:42 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:48.188 03:01:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:48.188 03:01:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:48.188 { 00:13:48.188 "subsystems": [ 00:13:48.188 { 00:13:48.188 "subsystem": "bdev", 00:13:48.188 "config": [ 00:13:48.188 { 00:13:48.188 "params": { 00:13:48.188 "io_mechanism": "io_uring_cmd", 00:13:48.188 "conserve_cpu": false, 00:13:48.188 "filename": "/dev/ng0n1", 00:13:48.188 "name": "xnvme_bdev" 00:13:48.188 }, 00:13:48.188 "method": "bdev_xnvme_create" 00:13:48.189 }, 00:13:48.189 { 00:13:48.189 "method": "bdev_wait_for_examine" 00:13:48.189 } 00:13:48.189 ] 00:13:48.189 } 00:13:48.189 ] 00:13:48.189 } 00:13:48.189 [2024-12-10 03:01:42.322649] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:13:48.189 [2024-12-10 03:01:42.322961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70831 ] 00:13:48.189 [2024-12-10 03:01:42.482254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.450 [2024-12-10 03:01:42.585638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.738 Running I/O for 5 seconds... 00:13:50.629 31847.00 IOPS, 124.40 MiB/s [2024-12-10T03:01:45.958Z] 31326.50 IOPS, 122.37 MiB/s [2024-12-10T03:01:46.900Z] 31055.00 IOPS, 121.31 MiB/s [2024-12-10T03:01:48.280Z] 30999.00 IOPS, 121.09 MiB/s [2024-12-10T03:01:48.280Z] 31234.80 IOPS, 122.01 MiB/s 00:13:53.892 Latency(us) 00:13:53.892 [2024-12-10T03:01:48.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.892 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:53.892 xnvme_bdev : 5.00 31234.48 122.01 0.00 0.00 2045.01 371.79 5167.26 00:13:53.892 [2024-12-10T03:01:48.280Z] =================================================================================================================== 00:13:53.892 [2024-12-10T03:01:48.280Z] Total : 31234.48 122.01 0.00 0.00 2045.01 371.79 5167.26 00:13:54.464 03:01:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:54.464 03:01:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:54.464 03:01:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:54.464 03:01:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:54.464 03:01:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:54.464 { 00:13:54.464 "subsystems": [ 00:13:54.464 { 00:13:54.464 "subsystem": "bdev", 00:13:54.464 "config": [ 00:13:54.464 { 00:13:54.464 "params": { 00:13:54.464 "io_mechanism": "io_uring_cmd", 00:13:54.464 "conserve_cpu": false, 00:13:54.464 "filename": "/dev/ng0n1", 00:13:54.464 "name": "xnvme_bdev" 00:13:54.464 }, 00:13:54.464 "method": "bdev_xnvme_create" 00:13:54.464 }, 00:13:54.464 { 00:13:54.464 "method": "bdev_wait_for_examine" 00:13:54.464 } 00:13:54.464 ] 00:13:54.464 } 00:13:54.464 ] 00:13:54.464 } 00:13:54.464 [2024-12-10 03:01:48.690086] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
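The JSON blocks interleaved above are what the suite's gen_conf helper emits for bdevperf, handed over via --json /dev/fd/62. A minimal standalone sketch of the same run, assuming the config is saved to a hypothetical xnvme.json in the repo root:

    cat > xnvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "io_uring_cmd",
                "conserve_cpu": false,
                "filename": "/dev/ng0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json xnvme.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096

The flags mirror the invocation logged above: queue depth 64, 4096-byte I/O, 5-second runs against the xnvme_bdev target.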
00:13:54.464 [2024-12-10 03:01:48.690221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70911 ] 00:13:54.726 [2024-12-10 03:01:48.854991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.726 [2024-12-10 03:01:48.984305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.988 Running I/O for 5 seconds... 00:13:56.934 32119.00 IOPS, 125.46 MiB/s [2024-12-10T03:01:52.708Z] 32369.50 IOPS, 126.44 MiB/s [2024-12-10T03:01:53.648Z] 32530.67 IOPS, 127.07 MiB/s [2024-12-10T03:01:54.593Z] 32446.75 IOPS, 126.75 MiB/s 00:14:00.205 Latency(us) 00:14:00.205 [2024-12-10T03:01:54.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.205 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:00.205 xnvme_bdev : 5.00 32256.97 126.00 0.00 0.00 1980.24 415.90 9124.63 00:14:00.205 [2024-12-10T03:01:54.593Z] =================================================================================================================== 00:14:00.205 [2024-12-10T03:01:54.593Z] Total : 32256.97 126.00 0.00 0.00 1980.24 415.90 9124.63 00:14:00.777 03:01:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:00.777 03:01:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:00.777 03:01:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:00.777 03:01:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:00.778 03:01:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:00.778 { 00:14:00.778 "subsystems": [ 00:14:00.778 { 00:14:00.778 "subsystem": "bdev", 00:14:00.778 "config": [ 00:14:00.778 { 00:14:00.778 "params": { 00:14:00.778 "io_mechanism": "io_uring_cmd", 00:14:00.778 "conserve_cpu": false, 00:14:00.778 "filename": "/dev/ng0n1", 00:14:00.778 "name": "xnvme_bdev" 00:14:00.778 }, 00:14:00.778 "method": "bdev_xnvme_create" 00:14:00.778 }, 00:14:00.778 { 00:14:00.778 "method": "bdev_wait_for_examine" 00:14:00.778 } 00:14:00.778 ] 00:14:00.778 } 00:14:00.778 ] 00:14:00.778 } 00:14:00.778 [2024-12-10 03:01:55.152930] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:14:00.778 [2024-12-10 03:01:55.153283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70985 ] 00:14:01.039 [2024-12-10 03:01:55.326397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.300 [2024-12-10 03:01:55.446812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.561 Running I/O for 5 seconds... 
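The `local -n io_pattern_ref=io_uring_cmd` seen in these traces is a bash nameref: io_uring_cmd is the name of an array defined elsewhere in the suite, and the runs logged in this test show its effective contents. A minimal illustration (the array definition below is an assumption inferred from the logged workloads):

    run_patterns() {
      local -n patterns=$1              # nameref: $1 holds the *name* of an array
      local io_pattern
      for io_pattern in "${patterns[@]}"; do
        echo "would run bdevperf -w ${io_pattern}"
      done
    }
    io_uring_cmd=(randread randwrite unmap write_zeroes)   # assumed definition
    run_patterns io_uring_cmd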
00:14:03.452 78848.00 IOPS, 308.00 MiB/s [2024-12-10T03:01:58.785Z] 78880.00 IOPS, 308.12 MiB/s [2024-12-10T03:02:00.173Z] 78869.33 IOPS, 308.08 MiB/s [2024-12-10T03:02:01.113Z] 78880.00 IOPS, 308.12 MiB/s [2024-12-10T03:02:01.113Z] 80563.20 IOPS, 314.70 MiB/s 00:14:06.725 Latency(us) 00:14:06.725 [2024-12-10T03:02:01.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.725 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:06.725 xnvme_bdev : 5.00 80539.53 314.61 0.00 0.00 791.27 532.48 3377.62 00:14:06.725 [2024-12-10T03:02:01.113Z] =================================================================================================================== 00:14:06.725 [2024-12-10T03:02:01.113Z] Total : 80539.53 314.61 0.00 0.00 791.27 532.48 3377.62 00:14:07.291 03:02:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.291 03:02:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:07.291 03:02:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:07.291 03:02:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:07.291 03:02:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.291 { 00:14:07.291 "subsystems": [ 00:14:07.291 { 00:14:07.291 "subsystem": "bdev", 00:14:07.291 "config": [ 00:14:07.291 { 00:14:07.291 "params": { 00:14:07.291 "io_mechanism": "io_uring_cmd", 00:14:07.291 "conserve_cpu": false, 00:14:07.291 "filename": "/dev/ng0n1", 00:14:07.291 "name": "xnvme_bdev" 00:14:07.291 }, 00:14:07.291 "method": "bdev_xnvme_create" 00:14:07.291 }, 00:14:07.291 { 00:14:07.291 "method": "bdev_wait_for_examine" 00:14:07.291 } 00:14:07.291 ] 00:14:07.291 } 00:14:07.291 ] 00:14:07.291 } 00:14:07.291 [2024-12-10 03:02:01.534999] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:14:07.291 [2024-12-10 03:02:01.535110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71059 ] 00:14:07.552 [2024-12-10 03:02:01.694755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.552 [2024-12-10 03:02:01.791805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.811 Running I/O for 5 seconds... 
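All four patterns share the same invocation shape, so the sweep recorded in this test can be reproduced with a single loop (sketch; xnvme.json as in the earlier sketch):

    for w in randread randwrite unmap write_zeroes; do
      ./build/examples/bdevperf --json xnvme.json -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
    done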
00:14:09.772 8534.00 IOPS, 33.34 MiB/s [2024-12-10T03:02:05.104Z] 9316.00 IOPS, 36.39 MiB/s [2024-12-10T03:02:06.486Z] 11112.33 IOPS, 43.41 MiB/s [2024-12-10T03:02:07.429Z] 11581.25 IOPS, 45.24 MiB/s [2024-12-10T03:02:07.429Z] 10696.20 IOPS, 41.78 MiB/s 00:14:13.041 Latency(us) 00:14:13.041 [2024-12-10T03:02:07.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.041 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:13.041 xnvme_bdev : 5.01 10687.67 41.75 0.00 0.00 5978.79 133.12 729163.62 00:14:13.041 [2024-12-10T03:02:07.429Z] =================================================================================================================== 00:14:13.041 [2024-12-10T03:02:07.429Z] Total : 10687.67 41.75 0.00 0.00 5978.79 133.12 729163.62 00:14:13.615 ************************************ 00:14:13.615 END TEST xnvme_bdevperf 00:14:13.615 ************************************ 00:14:13.615 00:14:13.615 real 0m25.627s 00:14:13.615 user 0m14.898s 00:14:13.615 sys 0m10.243s 00:14:13.615 03:02:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.615 03:02:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.615 03:02:07 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:13.615 03:02:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:13.615 03:02:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.615 03:02:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:13.615 ************************************ 00:14:13.615 START TEST xnvme_fio_plugin 00:14:13.615 ************************************ 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local 
asan_lib= 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:13.615 03:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.615 { 00:14:13.615 "subsystems": [ 00:14:13.615 { 00:14:13.615 "subsystem": "bdev", 00:14:13.615 "config": [ 00:14:13.615 { 00:14:13.615 "params": { 00:14:13.615 "io_mechanism": "io_uring_cmd", 00:14:13.615 "conserve_cpu": false, 00:14:13.615 "filename": "/dev/ng0n1", 00:14:13.615 "name": "xnvme_bdev" 00:14:13.615 }, 00:14:13.615 "method": "bdev_xnvme_create" 00:14:13.615 }, 00:14:13.615 { 00:14:13.615 "method": "bdev_wait_for_examine" 00:14:13.615 } 00:14:13.615 ] 00:14:13.615 } 00:14:13.615 ] 00:14:13.615 } 00:14:13.876 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:13.876 fio-3.35 00:14:13.876 Starting 1 thread 00:14:20.471 00:14:20.471 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71172: Tue Dec 10 03:02:13 2024 00:14:20.471 read: IOPS=33.1k, BW=129MiB/s (135MB/s)(646MiB/5001msec) 00:14:20.471 slat (usec): min=2, max=308, avg= 4.06, stdev= 3.01 00:14:20.471 clat (usec): min=978, max=3897, avg=1770.32, stdev=334.67 00:14:20.471 lat (usec): min=981, max=3929, avg=1774.38, stdev=335.20 00:14:20.471 clat percentiles (usec): 00:14:20.471 | 1.00th=[ 1188], 5.00th=[ 1319], 10.00th=[ 1401], 20.00th=[ 1500], 00:14:20.471 | 30.00th=[ 1565], 40.00th=[ 1647], 50.00th=[ 1713], 60.00th=[ 1811], 00:14:20.471 | 70.00th=[ 1909], 80.00th=[ 2024], 90.00th=[ 2212], 95.00th=[ 2376], 00:14:20.471 | 99.00th=[ 2769], 99.50th=[ 2966], 99.90th=[ 3490], 99.95th=[ 3589], 00:14:20.471 | 99.99th=[ 3785] 00:14:20.471 bw ( KiB/s): min=128512, max=137728, per=100.00%, avg=132551.11, stdev=2638.10, samples=9 00:14:20.471 iops : min=32128, max=34432, avg=33137.78, stdev=659.52, samples=9 00:14:20.471 lat (usec) : 1000=0.01% 00:14:20.471 lat (msec) : 2=78.22%, 4=21.78% 00:14:20.471 cpu : usr=35.98%, sys=61.86%, ctx=64, majf=0, minf=762 00:14:20.471 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:20.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.471 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 
00:14:20.471 issued rwts: total=165408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:20.471 00:14:20.471 Run status group 0 (all jobs): 00:14:20.471 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=646MiB (678MB), run=5001-5001msec 00:14:20.471 ----------------------------------------------------- 00:14:20.471 Suppressions used: 00:14:20.471 count bytes template 00:14:20.471 1 11 /usr/src/fio/parse.c 00:14:20.471 1 8 libtcmalloc_minimal.so 00:14:20.471 1 904 libcrypto.so 00:14:20.471 ----------------------------------------------------- 00:14:20.471 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:20.471 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:20.733 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:20.733 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:20.733 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:20.733 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:20.733 03:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:20.733 { 00:14:20.733 "subsystems": [ 00:14:20.733 { 00:14:20.733 "subsystem": "bdev", 00:14:20.733 "config": [ 00:14:20.733 { 00:14:20.733 "params": { 00:14:20.733 "io_mechanism": "io_uring_cmd", 00:14:20.733 "conserve_cpu": false, 00:14:20.733 "filename": "/dev/ng0n1", 00:14:20.733 "name": "xnvme_bdev" 00:14:20.733 }, 00:14:20.733 "method": "bdev_xnvme_create" 00:14:20.733 }, 00:14:20.733 { 00:14:20.733 "method": "bdev_wait_for_examine" 00:14:20.733 } 00:14:20.733 ] 00:14:20.733 } 00:14:20.733 ] 00:14:20.733 } 00:14:20.733 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:20.733 fio-3.35 00:14:20.733 Starting 1 thread 00:14:27.325 00:14:27.325 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71263: Tue Dec 10 03:02:20 2024 00:14:27.325 write: IOPS=16.7k, BW=65.2MiB/s (68.3MB/s)(326MiB/5009msec); 0 zone resets 00:14:27.325 slat (nsec): min=2904, max=80900, avg=3876.57, stdev=2425.98 00:14:27.325 clat (usec): min=60, max=18823, avg=3764.11, stdev=4686.29 00:14:27.325 lat (usec): min=63, max=18827, avg=3767.98, stdev=4686.28 00:14:27.325 clat percentiles (usec): 00:14:27.325 | 1.00th=[ 182], 5.00th=[ 408], 10.00th=[ 578], 20.00th=[ 717], 00:14:27.325 | 30.00th=[ 816], 40.00th=[ 1074], 50.00th=[ 1483], 60.00th=[ 1696], 00:14:27.325 | 70.00th=[ 2040], 80.00th=[10028], 90.00th=[12256], 95.00th=[13304], 00:14:27.325 | 99.00th=[15008], 99.50th=[15795], 99.90th=[17171], 99.95th=[17957], 00:14:27.325 | 99.99th=[18482] 00:14:27.325 bw ( KiB/s): min=51904, max=83448, per=100.00%, avg=66806.40, stdev=11620.57, samples=10 00:14:27.325 iops : min=12976, max=20862, avg=16701.60, stdev=2905.14, samples=10 00:14:27.325 lat (usec) : 100=0.09%, 250=1.60%, 500=6.18%, 750=16.20%, 1000=14.58% 00:14:27.325 lat (msec) : 2=30.56%, 4=6.05%, 10=4.60%, 20=20.14% 00:14:27.325 cpu : usr=33.99%, sys=65.10%, ctx=10, majf=0, minf=762 00:14:27.325 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.5%, 16=7.5%, 32=75.3%, >=64=10.6% 00:14:27.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.325 complete : 0=0.0%, 4=94.9%, 8=1.9%, 16=2.0%, 32=0.8%, 64=0.5%, >=64=0.0% 00:14:27.325 issued rwts: total=0,83571,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.325 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.325 00:14:27.325 Run status group 0 (all jobs): 00:14:27.325 WRITE: bw=65.2MiB/s (68.3MB/s), 65.2MiB/s-65.2MiB/s (68.3MB/s-68.3MB/s), io=326MiB (342MB), run=5009-5009msec 00:14:27.587 ----------------------------------------------------- 00:14:27.587 Suppressions used: 00:14:27.587 count bytes template 00:14:27.587 1 11 /usr/src/fio/parse.c 00:14:27.587 1 8 libtcmalloc_minimal.so 00:14:27.587 1 904 libcrypto.so 00:14:27.587 ----------------------------------------------------- 00:14:27.587 00:14:27.587 00:14:27.587 real 0m13.791s 00:14:27.587 user 0m6.394s 00:14:27.587 sys 0m6.905s 00:14:27.587 03:02:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.587 ************************************ 00:14:27.587 END TEST xnvme_fio_plugin 00:14:27.587 ************************************ 00:14:27.587 03:02:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:27.587 03:02:21 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:27.587 03:02:21 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:27.587 03:02:21 nvme_xnvme 
-- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:27.587 03:02:21 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:27.587 03:02:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:27.587 03:02:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.587 03:02:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.587 ************************************ 00:14:27.587 START TEST xnvme_rpc 00:14:27.587 ************************************ 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71348 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71348 00:14:27.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71348 ']' 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.587 03:02:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:27.587 [2024-12-10 03:02:21.900054] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
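This second xnvme_rpc pass repeats the first with CPU conservation enabled: the loop above maps cc["true"] to the -c flag of bdev_xnvme_create. The RPC sequence the test exercises, as a standalone sketch against a running spdk_tgt (scripts/rpc.py assumed from the same checkout):

    scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev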
00:14:27.587 [2024-12-10 03:02:21.900206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71348 ] 00:14:27.848 [2024-12-10 03:02:22.065714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.848 [2024-12-10 03:02:22.183784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.792 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.792 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:28.792 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:28.792 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.792 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.792 xnvme_bdev 00:14:28.792 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.792 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:28.792 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:28.792 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:28.792 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.792 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.792 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.792 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.793 03:02:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71348 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71348 ']' 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71348 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71348 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.793 killing process with pid 71348 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71348' 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71348 00:14:28.793 03:02:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71348 00:14:30.710 00:14:30.710 real 0m2.897s 00:14:30.710 user 0m2.887s 00:14:30.710 sys 0m0.493s 00:14:30.710 03:02:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.710 ************************************ 00:14:30.710 END TEST xnvme_rpc 00:14:30.710 ************************************ 00:14:30.710 03:02:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.710 03:02:24 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:30.710 03:02:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:30.710 03:02:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.710 03:02:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:30.710 ************************************ 00:14:30.710 START TEST xnvme_bdevperf 00:14:30.710 ************************************ 00:14:30.710 03:02:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:30.710 03:02:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:30.710 03:02:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:30.711 03:02:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:30.711 03:02:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:30.711 03:02:24 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:30.711 03:02:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:30.711 03:02:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:30.711 { 00:14:30.711 "subsystems": [ 00:14:30.711 { 00:14:30.711 "subsystem": "bdev", 00:14:30.711 "config": [ 00:14:30.711 { 00:14:30.711 "params": { 00:14:30.711 "io_mechanism": "io_uring_cmd", 00:14:30.711 "conserve_cpu": true, 00:14:30.711 "filename": "/dev/ng0n1", 00:14:30.711 "name": "xnvme_bdev" 00:14:30.711 }, 00:14:30.711 "method": "bdev_xnvme_create" 00:14:30.711 }, 00:14:30.711 { 00:14:30.711 "method": "bdev_wait_for_examine" 00:14:30.711 } 00:14:30.711 ] 00:14:30.711 } 00:14:30.711 ] 00:14:30.711 } 00:14:30.711 [2024-12-10 03:02:24.849219] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:14:30.711 [2024-12-10 03:02:24.849363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71422 ] 00:14:30.711 [2024-12-10 03:02:25.006686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.973 [2024-12-10 03:02:25.123781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.238 Running I/O for 5 seconds... 00:14:33.125 33647.00 IOPS, 131.43 MiB/s [2024-12-10T03:02:28.456Z] 33875.00 IOPS, 132.32 MiB/s [2024-12-10T03:02:29.859Z] 34111.00 IOPS, 133.25 MiB/s [2024-12-10T03:02:30.430Z] 34060.50 IOPS, 133.05 MiB/s [2024-12-10T03:02:30.430Z] 34031.20 IOPS, 132.93 MiB/s 00:14:36.042 Latency(us) 00:14:36.042 [2024-12-10T03:02:30.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.042 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:36.042 xnvme_bdev : 5.01 34009.63 132.85 0.00 0.00 1877.36 945.23 4763.96 00:14:36.042 [2024-12-10T03:02:30.430Z] =================================================================================================================== 00:14:36.042 [2024-12-10T03:02:30.430Z] Total : 34009.63 132.85 0.00 0.00 1877.36 945.23 4763.96 00:14:36.992 03:02:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:36.992 03:02:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:36.992 03:02:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:36.992 03:02:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:36.992 03:02:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:36.992 { 00:14:36.992 "subsystems": [ 00:14:36.992 { 00:14:36.992 "subsystem": "bdev", 00:14:36.992 "config": [ 00:14:36.992 { 00:14:36.992 "params": { 00:14:36.992 "io_mechanism": "io_uring_cmd", 00:14:36.992 "conserve_cpu": true, 00:14:36.992 "filename": "/dev/ng0n1", 00:14:36.992 "name": "xnvme_bdev" 00:14:36.992 }, 00:14:36.992 "method": "bdev_xnvme_create" 00:14:36.992 }, 00:14:36.992 { 00:14:36.992 "method": "bdev_wait_for_examine" 00:14:36.992 } 00:14:36.992 ] 00:14:36.992 } 00:14:36.992 ] 00:14:36.992 } 00:14:36.992 [2024-12-10 03:02:31.280688] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
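The config generated here differs from the first sweep's only in "conserve_cpu": true, toggled by method_bdev_xnvme_create_0["conserve_cpu"]=true at the top of this pass. With the two generated configs saved to hypothetical files, the difference is easy to confirm:

    diff <(jq -S . xnvme_false.json) <(jq -S . xnvme_true.json)
    # expect one differing value: "conserve_cpu": false vs true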
00:14:36.992 [2024-12-10 03:02:31.280826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71493 ] 00:14:37.255 [2024-12-10 03:02:31.446179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.255 [2024-12-10 03:02:31.568990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.516 Running I/O for 5 seconds... 00:14:39.853 19076.00 IOPS, 74.52 MiB/s [2024-12-10T03:02:35.184Z] 17698.50 IOPS, 69.13 MiB/s [2024-12-10T03:02:36.128Z] 17700.00 IOPS, 69.14 MiB/s [2024-12-10T03:02:37.072Z] 18025.25 IOPS, 70.41 MiB/s [2024-12-10T03:02:37.072Z] 18368.60 IOPS, 71.75 MiB/s 00:14:42.684 Latency(us) 00:14:42.684 [2024-12-10T03:02:37.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.684 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:42.684 xnvme_bdev : 5.01 18351.03 71.68 0.00 0.00 3480.57 75.22 18148.43 00:14:42.684 [2024-12-10T03:02:37.072Z] =================================================================================================================== 00:14:42.684 [2024-12-10T03:02:37.072Z] Total : 18351.03 71.68 0.00 0.00 3480.57 75.22 18148.43 00:14:43.627 03:02:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:43.628 03:02:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:43.628 03:02:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:43.628 03:02:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:43.628 03:02:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:43.628 { 00:14:43.628 "subsystems": [ 00:14:43.628 { 00:14:43.628 "subsystem": "bdev", 00:14:43.628 "config": [ 00:14:43.628 { 00:14:43.628 "params": { 00:14:43.628 "io_mechanism": "io_uring_cmd", 00:14:43.628 "conserve_cpu": true, 00:14:43.628 "filename": "/dev/ng0n1", 00:14:43.628 "name": "xnvme_bdev" 00:14:43.628 }, 00:14:43.628 "method": "bdev_xnvme_create" 00:14:43.628 }, 00:14:43.628 { 00:14:43.628 "method": "bdev_wait_for_examine" 00:14:43.628 } 00:14:43.628 ] 00:14:43.628 } 00:14:43.628 ] 00:14:43.628 } 00:14:43.628 [2024-12-10 03:02:37.733099] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:14:43.628 [2024-12-10 03:02:37.733245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71567 ] 00:14:43.628 [2024-12-10 03:02:37.897951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.889 [2024-12-10 03:02:38.021242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.150 Running I/O for 5 seconds... 
00:14:46.073 80320.00 IOPS, 313.75 MiB/s [2024-12-10T03:02:41.395Z] 80192.00 IOPS, 313.25 MiB/s [2024-12-10T03:02:42.332Z] 81130.67 IOPS, 316.92 MiB/s [2024-12-10T03:02:43.712Z] 81984.00 IOPS, 320.25 MiB/s [2024-12-10T03:02:43.712Z] 85017.60 IOPS, 332.10 MiB/s 00:14:49.324 Latency(us) 00:14:49.324 [2024-12-10T03:02:43.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.324 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:49.324 xnvme_bdev : 5.00 84974.89 331.93 0.00 0.00 749.80 382.82 2734.87 00:14:49.324 [2024-12-10T03:02:43.712Z] =================================================================================================================== 00:14:49.324 [2024-12-10T03:02:43.712Z] Total : 84974.89 331.93 0.00 0.00 749.80 382.82 2734.87 00:14:49.585 03:02:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:49.585 03:02:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:49.585 03:02:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:49.585 03:02:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:49.585 03:02:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:49.585 { 00:14:49.585 "subsystems": [ 00:14:49.585 { 00:14:49.585 "subsystem": "bdev", 00:14:49.585 "config": [ 00:14:49.585 { 00:14:49.585 "params": { 00:14:49.585 "io_mechanism": "io_uring_cmd", 00:14:49.585 "conserve_cpu": true, 00:14:49.585 "filename": "/dev/ng0n1", 00:14:49.585 "name": "xnvme_bdev" 00:14:49.585 }, 00:14:49.585 "method": "bdev_xnvme_create" 00:14:49.585 }, 00:14:49.585 { 00:14:49.585 "method": "bdev_wait_for_examine" 00:14:49.585 } 00:14:49.585 ] 00:14:49.585 } 00:14:49.585 ] 00:14:49.585 } 00:14:49.585 [2024-12-10 03:02:43.927077] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:14:49.585 [2024-12-10 03:02:43.927186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71641 ] 00:14:49.845 [2024-12-10 03:02:44.085409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.845 [2024-12-10 03:02:44.162732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.102 Running I/O for 5 seconds... 
00:14:51.994 13603.00 IOPS, 53.14 MiB/s [2024-12-10T03:02:47.766Z] 13603.50 IOPS, 53.14 MiB/s [2024-12-10T03:02:48.704Z] 16811.33 IOPS, 65.67 MiB/s [2024-12-10T03:02:49.647Z] 15273.50 IOPS, 59.66 MiB/s [2024-12-10T03:02:49.647Z] 16601.00 IOPS, 64.85 MiB/s 00:14:55.259 Latency(us) 00:14:55.259 [2024-12-10T03:02:49.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.259 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:55.259 xnvme_bdev : 5.04 16474.87 64.35 0.00 0.00 3875.90 75.22 500090.09 00:14:55.259 [2024-12-10T03:02:49.647Z] =================================================================================================================== 00:14:55.259 [2024-12-10T03:02:49.647Z] Total : 16474.87 64.35 0.00 0.00 3875.90 75.22 500090.09 00:14:55.830 00:14:55.830 real 0m25.334s 00:14:55.830 user 0m19.147s 00:14:55.830 sys 0m4.839s 00:14:55.830 03:02:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.830 03:02:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:55.830 ************************************ 00:14:55.830 END TEST xnvme_bdevperf 00:14:55.830 ************************************ 00:14:55.830 03:02:50 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:55.830 03:02:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:55.830 03:02:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.830 03:02:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:55.830 ************************************ 00:14:55.830 START TEST xnvme_fio_plugin 00:14:55.830 ************************************ 00:14:55.830 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:55.830 03:02:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:55.830 03:02:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:55.830 03:02:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:55.830 03:02:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.830 03:02:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:55.830 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.830 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:55.830 03:02:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:55.830 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:55.830 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:55.830 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:55.830 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
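The setup trace around this point locates the ASAN runtime and prepends it, together with SPDK's fio plugin, to LD_PRELOAD before launching fio (the LD_PRELOAD assignment appears a few lines below). Condensed into a standalone sketch, with a hypothetical xnvme.json standing in for the /dev/fd/62 config:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=xnvme.json \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev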
00:14:55.831 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:55.831 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:55.831 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:55.831 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:55.831 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:55.831 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:56.090 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:56.090 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:56.090 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:56.090 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:56.090 03:02:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:56.090 { 00:14:56.090 "subsystems": [ 00:14:56.090 { 00:14:56.090 "subsystem": "bdev", 00:14:56.090 "config": [ 00:14:56.090 { 00:14:56.090 "params": { 00:14:56.090 "io_mechanism": "io_uring_cmd", 00:14:56.090 "conserve_cpu": true, 00:14:56.090 "filename": "/dev/ng0n1", 00:14:56.090 "name": "xnvme_bdev" 00:14:56.090 }, 00:14:56.090 "method": "bdev_xnvme_create" 00:14:56.090 }, 00:14:56.090 { 00:14:56.090 "method": "bdev_wait_for_examine" 00:14:56.090 } 00:14:56.090 ] 00:14:56.090 } 00:14:56.090 ] 00:14:56.090 } 00:14:56.090 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:56.090 fio-3.35 00:14:56.090 Starting 1 thread 00:15:02.708 00:15:02.708 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71754: Tue Dec 10 03:02:56 2024 00:15:02.708 read: IOPS=33.7k, BW=131MiB/s (138MB/s)(658MiB/5002msec) 00:15:02.708 slat (nsec): min=2879, max=71809, avg=3767.33, stdev=2123.65 00:15:02.708 clat (usec): min=908, max=3830, avg=1748.52, stdev=318.01 00:15:02.708 lat (usec): min=911, max=3845, avg=1752.29, stdev=318.36 00:15:02.708 clat percentiles (usec): 00:15:02.708 | 1.00th=[ 1172], 5.00th=[ 1303], 10.00th=[ 1385], 20.00th=[ 1483], 00:15:02.708 | 30.00th=[ 1565], 40.00th=[ 1631], 50.00th=[ 1713], 60.00th=[ 1795], 00:15:02.708 | 70.00th=[ 1876], 80.00th=[ 1991], 90.00th=[ 2147], 95.00th=[ 2311], 00:15:02.708 | 99.00th=[ 2671], 99.50th=[ 2835], 99.90th=[ 3556], 99.95th=[ 3687], 00:15:02.708 | 99.99th=[ 3785] 00:15:02.708 bw ( KiB/s): min=131072, max=140288, per=99.79%, avg=134371.56, stdev=2908.85, samples=9 00:15:02.708 iops : min=32768, max=35072, avg=33592.89, stdev=727.21, samples=9 00:15:02.708 lat (usec) : 1000=0.04% 00:15:02.708 lat (msec) : 2=80.70%, 4=19.26% 00:15:02.708 cpu : usr=57.01%, sys=39.75%, ctx=14, majf=0, minf=762 00:15:02.708 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:02.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.708 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:15:02.708 issued rwts: total=168384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.708 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:02.708 00:15:02.708 Run status group 0 (all jobs): 00:15:02.708 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=658MiB (690MB), run=5002-5002msec 00:15:02.708 ----------------------------------------------------- 00:15:02.708 Suppressions used: 00:15:02.708 count bytes template 00:15:02.708 1 11 /usr/src/fio/parse.c 00:15:02.708 1 8 libtcmalloc_minimal.so 00:15:02.708 1 904 libcrypto.so 00:15:02.708 ----------------------------------------------------- 00:15:02.708 00:15:02.708 03:02:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:02.709 03:02:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:02.709 { 00:15:02.709 "subsystems": [ 00:15:02.709 { 00:15:02.709 "subsystem": "bdev", 00:15:02.709 "config": [ 00:15:02.709 { 00:15:02.709 "params": { 00:15:02.709 "io_mechanism": "io_uring_cmd", 00:15:02.709 "conserve_cpu": true, 00:15:02.709 "filename": "/dev/ng0n1", 00:15:02.709 "name": "xnvme_bdev" 00:15:02.709 }, 00:15:02.709 "method": "bdev_xnvme_create" 00:15:02.709 }, 00:15:02.709 { 00:15:02.709 "method": "bdev_wait_for_examine" 00:15:02.709 } 00:15:02.709 ] 00:15:02.709 } 00:15:02.709 ] 00:15:02.709 } 00:15:02.971 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:02.971 fio-3.35 00:15:02.971 Starting 1 thread 00:15:09.563 00:15:09.563 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71849: Tue Dec 10 03:03:02 2024 00:15:09.563 write: IOPS=24.5k, BW=95.7MiB/s (100MB/s)(478MiB/5001msec); 0 zone resets 00:15:09.563 slat (usec): min=2, max=341, avg= 3.91, stdev= 2.56 00:15:09.563 clat (usec): min=66, max=25297, avg=2477.10, stdev=3482.55 00:15:09.563 lat (usec): min=70, max=25300, avg=2481.01, stdev=3482.68 00:15:09.563 clat percentiles (usec): 00:15:09.563 | 1.00th=[ 326], 5.00th=[ 750], 10.00th=[ 1188], 20.00th=[ 1352], 00:15:09.563 | 30.00th=[ 1467], 40.00th=[ 1549], 50.00th=[ 1631], 60.00th=[ 1729], 00:15:09.563 | 70.00th=[ 1827], 80.00th=[ 1975], 90.00th=[ 2278], 95.00th=[13566], 00:15:09.563 | 99.00th=[18220], 99.50th=[19268], 99.90th=[21103], 99.95th=[21627], 00:15:09.563 | 99.99th=[23725] 00:15:09.563 bw ( KiB/s): min=41096, max=142862, per=94.33%, avg=92409.22, stdev=45033.70, samples=9 00:15:09.563 iops : min=10274, max=35715, avg=23102.22, stdev=11258.39, samples=9 00:15:09.563 lat (usec) : 100=0.03%, 250=0.62%, 500=1.83%, 750=2.46%, 1000=1.82% 00:15:09.563 lat (msec) : 2=74.91%, 4=11.94%, 10=0.16%, 20=5.93%, 50=0.29% 00:15:09.563 cpu : usr=68.74%, sys=26.54%, ctx=19, majf=0, minf=762 00:15:09.563 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.7%, 16=21.5%, 32=54.1%, >=64=4.4% 00:15:09.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.563 complete : 0=0.0%, 4=97.7%, 8=0.6%, 16=0.3%, 32=0.1%, 64=1.3%, >=64=0.0% 00:15:09.563 issued rwts: total=0,122475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:09.563 00:15:09.563 Run status group 0 (all jobs): 00:15:09.563 WRITE: bw=95.7MiB/s (100MB/s), 95.7MiB/s-95.7MiB/s (100MB/s-100MB/s), io=478MiB (502MB), run=5001-5001msec 00:15:09.563 ----------------------------------------------------- 00:15:09.563 Suppressions used: 00:15:09.563 count bytes template 00:15:09.563 1 11 /usr/src/fio/parse.c 00:15:09.563 1 8 libtcmalloc_minimal.so 00:15:09.563 1 904 libcrypto.so 00:15:09.563 ----------------------------------------------------- 00:15:09.563 00:15:09.563 00:15:09.563 real 0m13.726s 00:15:09.563 user 0m9.137s 00:15:09.563 sys 0m3.869s 00:15:09.563 03:03:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.563 03:03:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:09.563 ************************************ 00:15:09.563 END TEST xnvme_fio_plugin 00:15:09.563 ************************************ 00:15:09.824 Process with pid 71348 is not found 00:15:09.824 03:03:03 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71348 00:15:09.824 03:03:03 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71348 ']' 00:15:09.824 03:03:03 
nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71348 00:15:09.824 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71348) - No such process 00:15:09.824 03:03:03 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71348 is not found' 00:15:09.824 00:15:09.824 real 3m30.910s 00:15:09.824 user 2m7.179s 00:15:09.824 sys 1m11.035s 00:15:09.824 ************************************ 00:15:09.824 END TEST nvme_xnvme 00:15:09.824 ************************************ 00:15:09.824 03:03:03 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.824 03:03:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.824 03:03:04 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:09.824 03:03:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:09.824 03:03:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.824 03:03:04 -- common/autotest_common.sh@10 -- # set +x 00:15:09.824 ************************************ 00:15:09.824 START TEST blockdev_xnvme 00:15:09.824 ************************************ 00:15:09.824 03:03:04 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:09.824 * Looking for test storage... 00:15:09.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:09.824 03:03:04 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:09.824 03:03:04 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:09.824 03:03:04 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:09.824 03:03:04 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.824 03:03:04 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:09.824 03:03:04 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.824 03:03:04 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:09.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.824 --rc genhtml_branch_coverage=1 00:15:09.824 --rc genhtml_function_coverage=1 00:15:09.824 --rc genhtml_legend=1 00:15:09.824 --rc geninfo_all_blocks=1 00:15:09.824 --rc geninfo_unexecuted_blocks=1 00:15:09.824 00:15:09.824 ' 00:15:09.824 03:03:04 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:09.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.824 --rc genhtml_branch_coverage=1 00:15:09.824 --rc genhtml_function_coverage=1 00:15:09.824 --rc genhtml_legend=1 00:15:09.824 --rc geninfo_all_blocks=1 00:15:09.825 --rc geninfo_unexecuted_blocks=1 00:15:09.825 00:15:09.825 ' 00:15:09.825 03:03:04 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.825 --rc genhtml_branch_coverage=1 00:15:09.825 --rc genhtml_function_coverage=1 00:15:09.825 --rc genhtml_legend=1 00:15:09.825 --rc geninfo_all_blocks=1 00:15:09.825 --rc geninfo_unexecuted_blocks=1 00:15:09.825 00:15:09.825 ' 00:15:09.825 03:03:04 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.825 --rc genhtml_branch_coverage=1 00:15:09.825 --rc genhtml_function_coverage=1 00:15:09.825 --rc genhtml_legend=1 00:15:09.825 --rc geninfo_all_blocks=1 00:15:09.825 --rc geninfo_unexecuted_blocks=1 00:15:09.825 00:15:09.825 ' 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71979 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71979 00:15:09.825 03:03:04 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71979 ']' 00:15:09.825 03:03:04 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:09.825 03:03:04 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.825 03:03:04 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.825 03:03:04 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.825 03:03:04 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.825 03:03:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:10.087 [2024-12-10 03:03:04.277546] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
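For orientation: the xtrace above boots a standalone spdk_tgt, waits for its RPC socket, and will shortly register every /dev/nvme*n* namespace as an xNVMe bdev. A condensed, hand-runnable sketch of that sequence (each binary and RPC below appears verbatim elsewhere in this trace; the backgrounding and comments are editorial):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  # waitforlisten: poll until /var/tmp/spdk.sock accepts connections, then:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs   # expect six unclaimed xNVMe bdevs
  # -c selects conserve_cpu, matching the "conserve_cpu": true JSON earlier in this log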
00:15:10.087 [2024-12-10 03:03:04.277920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71979 ] 00:15:10.087 [2024-12-10 03:03:04.443255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.349 [2024-12-10 03:03:04.573352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.925 03:03:05 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.925 03:03:05 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:10.925 03:03:05 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:10.925 03:03:05 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:10.925 03:03:05 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:10.926 03:03:05 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:10.926 03:03:05 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:11.497 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:12.117 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:12.117 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:12.117 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:12.117 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:12.117 03:03:06 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:12.117 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.117 03:03:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.118 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:12.118 nvme0n1 00:15:12.118 nvme0n2 00:15:12.118 nvme0n3 00:15:12.118 nvme1n1 00:15:12.118 nvme2n1 00:15:12.118 nvme3n1 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.118 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.118 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:12.118 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.118 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.118 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.118 
03:03:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.118 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:12.118 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.118 03:03:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.118 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:12.379 03:03:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.379 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:12.379 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:12.380 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "821678ac-475d-4cd0-aca4-5efa7e5d7617"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "821678ac-475d-4cd0-aca4-5efa7e5d7617",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "05574f2a-28da-497f-b1a6-3922ca2c6d3f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "05574f2a-28da-497f-b1a6-3922ca2c6d3f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "52c1182a-3e0e-4c5b-8abd-65fc8bd9eec5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52c1182a-3e0e-4c5b-8abd-65fc8bd9eec5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "93043cf3-94f9-4a40-be56-7f16725101a1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "93043cf3-94f9-4a40-be56-7f16725101a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a8b362ff-0a6d-4f82-8497-bdc098be484f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a8b362ff-0a6d-4f82-8497-bdc098be484f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ae1e3922-80bc-43ef-9cf4-4d45eaf9669a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ae1e3922-80bc-43ef-9cf4-4d45eaf9669a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:12.380 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:12.380 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:12.380 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:12.380 03:03:06 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 71979 00:15:12.380 03:03:06 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71979 ']' 00:15:12.380 03:03:06 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71979 00:15:12.380 03:03:06 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:12.380 03:03:06 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.380 03:03:06 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 71979 00:15:12.380 killing process with pid 71979 00:15:12.380 03:03:06 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.380 03:03:06 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.380 03:03:06 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71979' 00:15:12.380 03:03:06 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71979 00:15:12.380 03:03:06 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71979 00:15:14.298 03:03:08 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:14.298 03:03:08 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:14.298 03:03:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:14.298 03:03:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.298 03:03:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.298 ************************************ 00:15:14.298 START TEST bdev_hello_world 00:15:14.298 ************************************ 00:15:14.298 03:03:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:14.298 [2024-12-10 03:03:08.303063] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:15:14.298 [2024-12-10 03:03:08.303210] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72264 ] 00:15:14.298 [2024-12-10 03:03:08.466825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.298 [2024-12-10 03:03:08.590053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.871 [2024-12-10 03:03:08.985559] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:14.872 [2024-12-10 03:03:08.985614] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:14.872 [2024-12-10 03:03:08.985632] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:14.872 [2024-12-10 03:03:08.987758] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:14.872 [2024-12-10 03:03:08.988528] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:14.872 [2024-12-10 03:03:08.988574] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:14.872 [2024-12-10 03:03:08.989423] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
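The bdev_hello_world pass tracing here reduces to a single invocation of the bundled example binary (paths exactly as used in this run):

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1
  # success is the NOTICE sequence seen in this trace: start app -> open bdev
  # nvme0n1 -> open io channel -> write "Hello World!" -> read it back -> stop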
00:15:14.872 00:15:14.872 [2024-12-10 03:03:08.989462] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:15.444 00:15:15.444 real 0m1.546s 00:15:15.444 user 0m1.178s 00:15:15.444 sys 0m0.225s 00:15:15.444 03:03:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.444 ************************************ 00:15:15.444 END TEST bdev_hello_world 00:15:15.444 ************************************ 00:15:15.444 03:03:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:15.706 03:03:09 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:15.706 03:03:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:15.706 03:03:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.706 03:03:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.706 ************************************ 00:15:15.706 START TEST bdev_bounds 00:15:15.706 ************************************ 00:15:15.706 03:03:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:15.706 03:03:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72301 00:15:15.706 Process bdevio pid: 72301 00:15:15.706 03:03:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:15.706 03:03:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72301' 00:15:15.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.706 03:03:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72301 00:15:15.706 03:03:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72301 ']' 00:15:15.706 03:03:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.706 03:03:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:15.706 03:03:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.706 03:03:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.706 03:03:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.706 03:03:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:15.706 [2024-12-10 03:03:09.916083] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:15:15.706 [2024-12-10 03:03:09.916472] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72301 ] 00:15:15.706 [2024-12-10 03:03:10.083761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:15.965 [2024-12-10 03:03:10.215923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.965 [2024-12-10 03:03:10.216226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.965 [2024-12-10 03:03:10.216228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.537 03:03:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.537 03:03:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:16.537 03:03:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:16.537 I/O targets: 00:15:16.537 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:16.537 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:16.537 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:16.537 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:16.537 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:16.537 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:16.537 00:15:16.537 00:15:16.537 CUnit - A unit testing framework for C - Version 2.1-3 00:15:16.537 http://cunit.sourceforge.net/ 00:15:16.537 00:15:16.537 00:15:16.537 Suite: bdevio tests on: nvme3n1 00:15:16.537 Test: blockdev write read block ...passed 00:15:16.537 Test: blockdev write zeroes read block ...passed 00:15:16.537 Test: blockdev write zeroes read no split ...passed 00:15:16.537 Test: blockdev write zeroes read split ...passed 00:15:16.798 Test: blockdev write zeroes read split partial ...passed 00:15:16.798 Test: blockdev reset ...passed 00:15:16.798 Test: blockdev write read 8 blocks ...passed 00:15:16.798 Test: blockdev write read size > 128k ...passed 00:15:16.798 Test: blockdev write read invalid size ...passed 00:15:16.798 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:16.798 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:16.798 Test: blockdev write read max offset ...passed 00:15:16.798 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:16.798 Test: blockdev writev readv 8 blocks ...passed 00:15:16.798 Test: blockdev writev readv 30 x 1block ...passed 00:15:16.798 Test: blockdev writev readv block ...passed 00:15:16.798 Test: blockdev writev readv size > 128k ...passed 00:15:16.798 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:16.798 Test: blockdev comparev and writev ...passed 00:15:16.798 Test: blockdev nvme passthru rw ...passed 00:15:16.798 Test: blockdev nvme passthru vendor specific ...passed 00:15:16.798 Test: blockdev nvme admin passthru ...passed 00:15:16.798 Test: blockdev copy ...passed 00:15:16.798 Suite: bdevio tests on: nvme2n1 00:15:16.798 Test: blockdev write read block ...passed 00:15:16.798 Test: blockdev write zeroes read block ...passed 00:15:16.798 Test: blockdev write zeroes read no split ...passed 00:15:16.798 Test: blockdev write zeroes read split ...passed 00:15:16.798 Test: blockdev write zeroes read split partial ...passed 00:15:16.798 Test: blockdev reset ...passed 
00:15:16.798 Test: blockdev write read 8 blocks ...passed 00:15:16.798 Test: blockdev write read size > 128k ...passed 00:15:16.798 Test: blockdev write read invalid size ...passed 00:15:16.798 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:16.798 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:16.798 Test: blockdev write read max offset ...passed 00:15:16.798 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:16.798 Test: blockdev writev readv 8 blocks ...passed 00:15:16.798 Test: blockdev writev readv 30 x 1block ...passed 00:15:16.798 Test: blockdev writev readv block ...passed 00:15:16.798 Test: blockdev writev readv size > 128k ...passed 00:15:16.798 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:16.798 Test: blockdev comparev and writev ...passed 00:15:16.798 Test: blockdev nvme passthru rw ...passed 00:15:16.798 Test: blockdev nvme passthru vendor specific ...passed 00:15:16.798 Test: blockdev nvme admin passthru ...passed 00:15:16.798 Test: blockdev copy ...passed 00:15:16.798 Suite: bdevio tests on: nvme1n1 00:15:16.798 Test: blockdev write read block ...passed 00:15:16.798 Test: blockdev write zeroes read block ...passed 00:15:16.798 Test: blockdev write zeroes read no split ...passed 00:15:16.798 Test: blockdev write zeroes read split ...passed 00:15:16.798 Test: blockdev write zeroes read split partial ...passed 00:15:16.798 Test: blockdev reset ...passed 00:15:16.798 Test: blockdev write read 8 blocks ...passed 00:15:16.798 Test: blockdev write read size > 128k ...passed 00:15:16.798 Test: blockdev write read invalid size ...passed 00:15:16.798 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:16.798 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:16.798 Test: blockdev write read max offset ...passed 00:15:16.798 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:16.798 Test: blockdev writev readv 8 blocks ...passed 00:15:16.798 Test: blockdev writev readv 30 x 1block ...passed 00:15:16.798 Test: blockdev writev readv block ...passed 00:15:16.798 Test: blockdev writev readv size > 128k ...passed 00:15:16.798 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:16.798 Test: blockdev comparev and writev ...passed 00:15:16.798 Test: blockdev nvme passthru rw ...passed 00:15:16.798 Test: blockdev nvme passthru vendor specific ...passed 00:15:16.798 Test: blockdev nvme admin passthru ...passed 00:15:16.798 Test: blockdev copy ...passed 00:15:16.798 Suite: bdevio tests on: nvme0n3 00:15:16.798 Test: blockdev write read block ...passed 00:15:16.798 Test: blockdev write zeroes read block ...passed 00:15:16.798 Test: blockdev write zeroes read no split ...passed 00:15:16.798 Test: blockdev write zeroes read split ...passed 00:15:17.059 Test: blockdev write zeroes read split partial ...passed 00:15:17.060 Test: blockdev reset ...passed 00:15:17.060 Test: blockdev write read 8 blocks ...passed 00:15:17.060 Test: blockdev write read size > 128k ...passed 00:15:17.060 Test: blockdev write read invalid size ...passed 00:15:17.060 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:17.060 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:17.060 Test: blockdev write read max offset ...passed 00:15:17.060 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:17.060 Test: blockdev writev readv 8 blocks 
...passed 00:15:17.060 Test: blockdev writev readv 30 x 1block ...passed 00:15:17.060 Test: blockdev writev readv block ...passed 00:15:17.060 Test: blockdev writev readv size > 128k ...passed 00:15:17.060 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:17.060 Test: blockdev comparev and writev ...passed 00:15:17.060 Test: blockdev nvme passthru rw ...passed 00:15:17.060 Test: blockdev nvme passthru vendor specific ...passed 00:15:17.060 Test: blockdev nvme admin passthru ...passed 00:15:17.060 Test: blockdev copy ...passed 00:15:17.060 Suite: bdevio tests on: nvme0n2 00:15:17.060 Test: blockdev write read block ...passed 00:15:17.060 Test: blockdev write zeroes read block ...passed 00:15:17.060 Test: blockdev write zeroes read no split ...passed 00:15:17.060 Test: blockdev write zeroes read split ...passed 00:15:17.060 Test: blockdev write zeroes read split partial ...passed 00:15:17.060 Test: blockdev reset ...passed 00:15:17.060 Test: blockdev write read 8 blocks ...passed 00:15:17.060 Test: blockdev write read size > 128k ...passed 00:15:17.060 Test: blockdev write read invalid size ...passed 00:15:17.060 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:17.060 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:17.060 Test: blockdev write read max offset ...passed 00:15:17.060 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:17.060 Test: blockdev writev readv 8 blocks ...passed 00:15:17.060 Test: blockdev writev readv 30 x 1block ...passed 00:15:17.060 Test: blockdev writev readv block ...passed 00:15:17.060 Test: blockdev writev readv size > 128k ...passed 00:15:17.060 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:17.060 Test: blockdev comparev and writev ...passed 00:15:17.060 Test: blockdev nvme passthru rw ...passed 00:15:17.060 Test: blockdev nvme passthru vendor specific ...passed 00:15:17.060 Test: blockdev nvme admin passthru ...passed 00:15:17.060 Test: blockdev copy ...passed 00:15:17.060 Suite: bdevio tests on: nvme0n1 00:15:17.060 Test: blockdev write read block ...passed 00:15:17.060 Test: blockdev write zeroes read block ...passed 00:15:17.060 Test: blockdev write zeroes read no split ...passed 00:15:17.060 Test: blockdev write zeroes read split ...passed 00:15:17.060 Test: blockdev write zeroes read split partial ...passed 00:15:17.060 Test: blockdev reset ...passed 00:15:17.060 Test: blockdev write read 8 blocks ...passed 00:15:17.060 Test: blockdev write read size > 128k ...passed 00:15:17.060 Test: blockdev write read invalid size ...passed 00:15:17.060 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:17.060 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:17.060 Test: blockdev write read max offset ...passed 00:15:17.060 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:17.060 Test: blockdev writev readv 8 blocks ...passed 00:15:17.060 Test: blockdev writev readv 30 x 1block ...passed 00:15:17.060 Test: blockdev writev readv block ...passed 00:15:17.060 Test: blockdev writev readv size > 128k ...passed 00:15:17.060 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:17.060 Test: blockdev comparev and writev ...passed 00:15:17.060 Test: blockdev nvme passthru rw ...passed 00:15:17.060 Test: blockdev nvme passthru vendor specific ...passed 00:15:17.060 Test: blockdev nvme admin passthru ...passed 00:15:17.060 Test: blockdev copy ...passed 
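All six suites above are the same CUnit battery of 23 cases run once per xNVMe bdev, which is where the 138 tests in the summary below come from (6 x 23 = 138). The reset/flush/unmap-style cases pass without exercising real I/O here, presumably because the bdev_get_bdevs dump earlier in this log reports those I/O types as unsupported for xNVMe bdevs. Re-running the battery by hand uses the same two commands the harness used:

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests   # drives the suites over the RPC socket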
00:15:17.060 00:15:17.060 Run Summary: Type Total Ran Passed Failed Inactive 00:15:17.060 suites 6 6 n/a 0 0 00:15:17.060 tests 138 138 138 0 0 00:15:17.060 asserts 780 780 780 0 n/a 00:15:17.060 00:15:17.060 Elapsed time = 1.274 seconds 00:15:17.060 0 00:15:17.060 03:03:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72301 00:15:17.060 03:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72301 ']' 00:15:17.060 03:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72301 00:15:17.060 03:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:17.060 03:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.060 03:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72301 00:15:17.060 03:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.060 03:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.060 03:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72301' 00:15:17.060 killing process with pid 72301 00:15:17.060 03:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72301 00:15:17.060 03:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72301 00:15:18.005 03:03:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:18.005 00:15:18.005 real 0m2.356s 00:15:18.005 user 0m5.679s 00:15:18.005 sys 0m0.367s 00:15:18.005 03:03:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.005 03:03:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:18.005 ************************************ 00:15:18.005 END TEST bdev_bounds 00:15:18.005 ************************************ 00:15:18.005 03:03:12 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:18.005 03:03:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:18.005 03:03:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.005 03:03:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:18.005 ************************************ 00:15:18.005 START TEST bdev_nbd 00:15:18.005 ************************************ 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72359 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72359 /var/tmp/spdk-nbd.sock 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72359 ']' 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:18.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.005 03:03:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:18.005 [2024-12-10 03:03:12.352599] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
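bdev_nbd exports each xNVMe bdev through the kernel NBD driver: a bdev_svc app is started on its own RPC socket, and one nbd_start_disk call per bdev claims a /dev/nbdN node. Condensed sketch (binary, socket, and RPC names as in this trace):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_start_disk nvme0n1             # prints the node it claimed, e.g. /dev/nbd0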
00:15:18.005 [2024-12-10 03:03:12.352734] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.267 [2024-12-10 03:03:12.513949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.267 [2024-12-10 03:03:12.633396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:18.838 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.100 
1+0 records in 00:15:19.100 1+0 records out 00:15:19.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110062 s, 3.7 MB/s 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:19.100 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.361 1+0 records in 00:15:19.361 1+0 records out 00:15:19.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676525 s, 6.1 MB/s 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:19.361 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:19.626 03:03:13 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.626 1+0 records in 00:15:19.626 1+0 records out 00:15:19.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129207 s, 3.2 MB/s 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:19.626 03:03:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.920 1+0 records in 00:15:19.920 1+0 records out 00:15:19.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078861 s, 5.2 MB/s 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:19.920 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:19.921 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.182 1+0 records in 00:15:20.182 1+0 records out 00:15:20.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00142027 s, 2.9 MB/s 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:20.182 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:20.443 03:03:14 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.443 1+0 records in 00:15:20.443 1+0 records out 00:15:20.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129068 s, 3.2 MB/s 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:20.443 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:20.702 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:20.702 { 00:15:20.702 "nbd_device": "/dev/nbd0", 00:15:20.702 "bdev_name": "nvme0n1" 00:15:20.702 }, 00:15:20.702 { 00:15:20.702 "nbd_device": "/dev/nbd1", 00:15:20.702 "bdev_name": "nvme0n2" 00:15:20.702 }, 00:15:20.702 { 00:15:20.702 "nbd_device": "/dev/nbd2", 00:15:20.702 "bdev_name": "nvme0n3" 00:15:20.702 }, 00:15:20.702 { 00:15:20.702 "nbd_device": "/dev/nbd3", 00:15:20.702 "bdev_name": "nvme1n1" 00:15:20.702 }, 00:15:20.702 { 00:15:20.702 "nbd_device": "/dev/nbd4", 00:15:20.702 "bdev_name": "nvme2n1" 00:15:20.702 }, 00:15:20.702 { 00:15:20.702 "nbd_device": "/dev/nbd5", 00:15:20.702 "bdev_name": "nvme3n1" 00:15:20.702 } 00:15:20.702 ]' 00:15:20.702 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:20.702 03:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:20.702 { 00:15:20.702 "nbd_device": "/dev/nbd0", 00:15:20.702 "bdev_name": "nvme0n1" 00:15:20.702 }, 00:15:20.702 { 00:15:20.702 "nbd_device": "/dev/nbd1", 00:15:20.702 "bdev_name": "nvme0n2" 00:15:20.702 }, 00:15:20.702 { 00:15:20.702 "nbd_device": "/dev/nbd2", 00:15:20.702 "bdev_name": "nvme0n3" 00:15:20.702 }, 00:15:20.702 { 00:15:20.702 "nbd_device": "/dev/nbd3", 00:15:20.702 "bdev_name": "nvme1n1" 00:15:20.702 }, 00:15:20.702 { 00:15:20.702 "nbd_device": "/dev/nbd4", 00:15:20.702 "bdev_name": "nvme2n1" 00:15:20.702 }, 00:15:20.702 { 00:15:20.702 "nbd_device": "/dev/nbd5", 00:15:20.702 "bdev_name": "nvme3n1" 00:15:20.702 } 00:15:20.702 ]' 00:15:20.702 03:03:14 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:20.702 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:20.702 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.702 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:20.702 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.702 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:20.702 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.702 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:20.960 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.960 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.960 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.960 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.960 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.960 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.960 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:20.960 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.960 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.960 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:21.220 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:21.221 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:21.221 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:21.221 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.221 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.221 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:21.221 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.221 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.221 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.221 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:21.482 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:21.482 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:21.482 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:21.482 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.482 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.482 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:21.482 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.482 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.482 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.482 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:21.743 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:21.743 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:21.743 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:21.743 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.743 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.743 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:21.743 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.743 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.743 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.743 03:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:21.743 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:21.743 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:21.743 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:21.743 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.743 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.743 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:21.743 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.743 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.743 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.743 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:22.004 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:22.004 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:22.004 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:22.004 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.004 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.004 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:22.004 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:22.004 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.004 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:22.004 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.005 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
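
[annotation] Teardown mirrors setup: after each nbd_stop_disk RPC, waitfornbd_exit polls /proc/partitions until the name disappears. A sketch of that loop as traced (the sleep between probes is assumed):

  waitfornbd_exit() {
      local nbd_name=$1
      local i

      for ((i = 1; i <= 20; i++)); do
          # Done as soon as the kernel no longer lists the device.
          grep -q -w "$nbd_name" /proc/partitions || break
          sleep 0.1
      done
      return 0
  }
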
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:22.266 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:22.528 /dev/nbd0 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- 
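
[annotation] The count check that just returned 0 (and returns 6 further down, once the devices are re-attached) reduces to grepping the jq output; with nothing attached, grep -c still prints 0 but exits non-zero, which the helper swallows (the `true` step in the trace). A minimal reconstruction:

  nbd_get_count() {
      local rpc_server=$1
      local json names

      json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
      names=$(echo "$json" | jq -r '.[] | .nbd_device')
      # grep -c prints the count (0) even on no match; mask its exit code.
      echo "$names" | grep -c /dev/nbd || true
  }
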
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.528 1+0 records in 00:15:22.528 1+0 records out 00:15:22.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331928 s, 12.3 MB/s 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:22.528 03:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:22.789 /dev/nbd1 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.789 1+0 records in 00:15:22.789 1+0 records out 00:15:22.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039996 s, 10.2 MB/s 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:22.789 03:03:17 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:22.789 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:23.051 /dev/nbd10 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.051 1+0 records in 00:15:23.051 1+0 records out 00:15:23.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367739 s, 11.1 MB/s 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:23.051 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:23.312 /dev/nbd11 00:15:23.312 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.313 03:03:17 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.313 1+0 records in 00:15:23.313 1+0 records out 00:15:23.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292815 s, 14.0 MB/s 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:23.313 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:23.574 /dev/nbd12 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.574 1+0 records in 00:15:23.574 1+0 records out 00:15:23.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490044 s, 8.4 MB/s 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.574 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:23.575 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:23.575 /dev/nbd13 00:15:23.575 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:23.575 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:23.575 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:23.575 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:23.575 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.575 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.575 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:23.575 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:23.575 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.575 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.575 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.575 1+0 records in 00:15:23.575 1+0 records out 00:15:23.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033076 s, 12.4 MB/s 00:15:23.836 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.836 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:23.836 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.837 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.837 03:03:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:23.837 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.837 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:23.837 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:23.837 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:23.837 03:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:23.837 { 00:15:23.837 "nbd_device": "/dev/nbd0", 00:15:23.837 "bdev_name": "nvme0n1" 00:15:23.837 }, 00:15:23.837 { 00:15:23.837 "nbd_device": "/dev/nbd1", 00:15:23.837 "bdev_name": "nvme0n2" 00:15:23.837 }, 00:15:23.837 { 00:15:23.837 "nbd_device": "/dev/nbd10", 00:15:23.837 "bdev_name": "nvme0n3" 00:15:23.837 }, 00:15:23.837 { 00:15:23.837 "nbd_device": "/dev/nbd11", 00:15:23.837 "bdev_name": "nvme1n1" 00:15:23.837 }, 00:15:23.837 { 00:15:23.837 "nbd_device": "/dev/nbd12", 00:15:23.837 "bdev_name": "nvme2n1" 00:15:23.837 }, 00:15:23.837 { 00:15:23.837 "nbd_device": "/dev/nbd13", 00:15:23.837 "bdev_name": "nvme3n1" 00:15:23.837 } 00:15:23.837 ]' 00:15:23.837 03:03:18 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:23.837 { 00:15:23.837 "nbd_device": "/dev/nbd0", 00:15:23.837 "bdev_name": "nvme0n1" 00:15:23.837 }, 00:15:23.837 { 00:15:23.837 "nbd_device": "/dev/nbd1", 00:15:23.837 "bdev_name": "nvme0n2" 00:15:23.837 }, 00:15:23.837 { 00:15:23.837 "nbd_device": "/dev/nbd10", 00:15:23.837 "bdev_name": "nvme0n3" 00:15:23.837 }, 00:15:23.837 { 00:15:23.837 "nbd_device": "/dev/nbd11", 00:15:23.837 "bdev_name": "nvme1n1" 00:15:23.837 }, 00:15:23.837 { 00:15:23.837 "nbd_device": "/dev/nbd12", 00:15:23.837 "bdev_name": "nvme2n1" 00:15:23.837 }, 00:15:23.837 { 00:15:23.837 "nbd_device": "/dev/nbd13", 00:15:23.837 "bdev_name": "nvme3n1" 00:15:23.837 } 00:15:23.837 ]' 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:23.837 /dev/nbd1 00:15:23.837 /dev/nbd10 00:15:23.837 /dev/nbd11 00:15:23.837 /dev/nbd12 00:15:23.837 /dev/nbd13' 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:23.837 /dev/nbd1 00:15:23.837 /dev/nbd10 00:15:23.837 /dev/nbd11 00:15:23.837 /dev/nbd12 00:15:23.837 /dev/nbd13' 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:23.837 256+0 records in 00:15:23.837 256+0 records out 00:15:23.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00841259 s, 125 MB/s 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:23.837 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:24.097 256+0 records in 00:15:24.097 256+0 records out 00:15:24.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0755777 s, 13.9 MB/s 00:15:24.097 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:24.097 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:24.359 256+0 records in 00:15:24.359 256+0 records out 00:15:24.359 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.208639 s, 5.0 MB/s 00:15:24.359 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:24.359 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:24.359 256+0 records in 00:15:24.359 256+0 records out 00:15:24.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189645 s, 5.5 MB/s 00:15:24.359 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:24.359 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:24.620 256+0 records in 00:15:24.620 256+0 records out 00:15:24.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227493 s, 4.6 MB/s 00:15:24.620 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:24.621 03:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:24.881 256+0 records in 00:15:24.881 256+0 records out 00:15:24.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.28752 s, 3.6 MB/s 00:15:24.881 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:24.881 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:25.143 256+0 records in 00:15:25.143 256+0 records out 00:15:25.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241219 s, 4.3 MB/s 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.143 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.404 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:25.665 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:25.665 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:25.665 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:25.665 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.665 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.665 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:25.665 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:25.665 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.665 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.665 03:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
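
[annotation] The pass above is the actual data-integrity check of bdev_nbd: one shared 1 MiB random pattern is written through every NBD device with O_DIRECT, then compared back byte-for-byte. Condensed from the traced commands:

  tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

  # One shared random pattern: 256 x 4 KiB = 1 MiB.
  dd if=/dev/urandom of="$tmp" bs=4096 count=256

  # Write it to every device, bypassing the page cache.
  for i in "${nbd_list[@]}"; do
      dd if="$tmp" of="$i" bs=4096 count=256 oflag=direct
  done

  # Read back: cmp exits non-zero on the first differing byte, failing the test.
  for i in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp" "$i"
  done

  rm "$tmp"

The per-device dd throughput lines above (13.9 MB/s down to 3.6 MB/s) come from exactly these writes.
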
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:25.927 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:25.927 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:25.927 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:25.927 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.927 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.927 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:25.927 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:25.927 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.927 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.927 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:26.189 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:26.189 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:26.189 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:26.189 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.189 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.189 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:26.189 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.189 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.189 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.189 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:26.451 03:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:26.712 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:26.973 malloc_lvol_verify 00:15:26.973 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:27.234 a4e76c98-0525-4371-8eec-fae7b861db0f 00:15:27.234 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:27.495 b87658c9-ff77-43a2-b04c-dd9c80198351 00:15:27.495 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:27.495 /dev/nbd0 00:15:27.495 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:27.495 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:27.495 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:27.495 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:27.495 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:27.495 mke2fs 1.47.0 (5-Feb-2023) 00:15:27.495 
Discarding device blocks: 0/4096 done 00:15:27.495 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:27.495 00:15:27.495 Allocating group tables: 0/1 done 00:15:27.495 Writing inode tables: 0/1 done 00:15:27.495 Creating journal (1024 blocks): done 00:15:27.495 Writing superblocks and filesystem accounting information: 0/1 done 00:15:27.495 00:15:27.495 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:27.495 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:27.495 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:27.495 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:27.495 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:27.495 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.495 03:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:27.756 03:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72359 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72359 ']' 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72359 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72359 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.757 killing process with pid 72359 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72359' 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72359 00:15:27.757 03:03:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72359 00:15:28.327 03:03:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:28.327 00:15:28.327 real 0m10.400s 00:15:28.327 user 0m14.062s 00:15:28.327 sys 0m3.580s 00:15:28.327 03:03:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.327 ************************************ 00:15:28.327 END TEST bdev_nbd 00:15:28.327 ************************************ 00:15:28.327 03:03:22 blockdev_xnvme.bdev_nbd -- 
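
[annotation] nbd_with_lvol_verify, traced above, stacks a logical volume on a malloc bdev, exports it over NBD, and proves the device is genuinely usable by formatting it. Replayed standalone with the sizes passed in the trace (a 16 MB malloc bdev with 512-byte blocks and a 4 MiB lvol, hence mkfs's "4096 1k blocks"):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

  rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # backing bdev
  rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top
  rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
  rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export as /dev/nbd0

  # End-to-end proof: if any layer misbehaves, mkfs fails.
  mkfs.ext4 /dev/nbd0

  rpc nbd_stop_disk /dev/nbd0
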
common/autotest_common.sh@10 -- # set +x 00:15:28.590 03:03:22 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:28.590 03:03:22 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:28.590 03:03:22 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:28.590 03:03:22 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:28.590 03:03:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.590 03:03:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.590 03:03:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.590 ************************************ 00:15:28.590 START TEST bdev_fio 00:15:28.590 ************************************ 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:28.590 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:28.590 
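
[annotation] bdev_fio begins by generating a job file: fio_config_gen writes the verify-workload boilerplate into bdev.fio, serialize_overlap=1 is added once the fio-3.* version check passes, and the loop traced just below appends one job section per bdev. Roughly as follows; the redirection into bdev.fio is implied by the script, not visible in the trace:

  config=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
  bdevs_name="nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1"

  # Only emitted after confirming fio is a 3.x build (the version check above).
  echo serialize_overlap=1 >> "$config"

  # One [job_...] section per bdev; "filename" names the bdev, not a device node.
  for b in $bdevs_name; do
      {
          echo "[job_$b]"
          echo "filename=$b"
      } >> "$config"
  done
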
03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:28.590 ************************************ 00:15:28.590 START TEST bdev_fio_rw_verify 00:15:28.590 ************************************ 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:28.590 03:03:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:28.851 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:28.851 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:28.851 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:28.851 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:28.851 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:28.851 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:28.851 fio-3.35 00:15:28.851 Starting 6 threads 00:15:41.083 00:15:41.083 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72762: Tue Dec 10 03:03:33 2024 00:15:41.083 read: IOPS=24.4k, BW=95.4MiB/s (100MB/s)(954MiB/10003msec) 00:15:41.083 slat (usec): min=2, max=2173, avg= 5.54, stdev=13.93 00:15:41.083 clat (usec): min=83, max=12562, avg=756.36, 
stdev=666.58 00:15:41.083 lat (usec): min=86, max=12584, avg=761.90, stdev=667.33 00:15:41.083 clat percentiles (usec): 00:15:41.083 | 50.000th=[ 498], 99.000th=[ 3097], 99.900th=[ 4555], 99.990th=[ 5997], 00:15:41.083 | 99.999th=[12518] 00:15:41.083 write: IOPS=24.8k, BW=96.8MiB/s (102MB/s)(968MiB/10003msec); 0 zone resets 00:15:41.083 slat (usec): min=4, max=4524, avg=31.08, stdev=102.86 00:15:41.083 clat (usec): min=68, max=8814, avg=923.38, stdev=741.34 00:15:41.083 lat (usec): min=87, max=8830, avg=954.46, stdev=755.41 00:15:41.083 clat percentiles (usec): 00:15:41.083 | 50.000th=[ 635], 99.000th=[ 3458], 99.900th=[ 4883], 99.990th=[ 6259], 00:15:41.083 | 99.999th=[ 8094] 00:15:41.083 bw ( KiB/s): min=49190, max=192778, per=100.00%, avg=101116.58, stdev=6766.31, samples=114 00:15:41.083 iops : min=12296, max=48193, avg=25277.95, stdev=1691.61, samples=114 00:15:41.083 lat (usec) : 100=0.03%, 250=11.67%, 500=31.65%, 750=18.34%, 1000=9.20% 00:15:41.083 lat (msec) : 2=21.37%, 4=7.42%, 10=0.31%, 20=0.01% 00:15:41.083 cpu : usr=43.59%, sys=32.35%, ctx=7318, majf=0, minf=21583 00:15:41.083 IO depths : 1=11.8%, 2=24.2%, 4=50.8%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:41.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.083 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.083 issued rwts: total=244263,247906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.083 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:41.083 00:15:41.083 Run status group 0 (all jobs): 00:15:41.083 READ: bw=95.4MiB/s (100MB/s), 95.4MiB/s-95.4MiB/s (100MB/s-100MB/s), io=954MiB (1001MB), run=10003-10003msec 00:15:41.083 WRITE: bw=96.8MiB/s (102MB/s), 96.8MiB/s-96.8MiB/s (102MB/s-102MB/s), io=968MiB (1015MB), run=10003-10003msec 00:15:41.083 ----------------------------------------------------- 00:15:41.083 Suppressions used: 00:15:41.083 count bytes template 00:15:41.083 6 48 /usr/src/fio/parse.c 00:15:41.083 3478 333888 /usr/src/fio/iolog.c 00:15:41.083 1 8 libtcmalloc_minimal.so 00:15:41.083 1 904 libcrypto.so 00:15:41.083 ----------------------------------------------------- 00:15:41.083 00:15:41.083 00:15:41.083 real 0m11.883s 00:15:41.083 user 0m27.622s 00:15:41.083 sys 0m19.712s 00:15:41.083 03:03:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.083 03:03:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:41.083 ************************************ 00:15:41.083 END TEST bdev_fio_rw_verify 00:15:41.083 ************************************ 00:15:41.083 03:03:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:41.083 03:03:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:41.083 03:03:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:41.083 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "821678ac-475d-4cd0-aca4-5efa7e5d7617"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "821678ac-475d-4cd0-aca4-5efa7e5d7617",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "05574f2a-28da-497f-b1a6-3922ca2c6d3f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "05574f2a-28da-497f-b1a6-3922ca2c6d3f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "52c1182a-3e0e-4c5b-8abd-65fc8bd9eec5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52c1182a-3e0e-4c5b-8abd-65fc8bd9eec5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "93043cf3-94f9-4a40-be56-7f16725101a1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "93043cf3-94f9-4a40-be56-7f16725101a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a8b362ff-0a6d-4f82-8497-bdc098be484f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a8b362ff-0a6d-4f82-8497-bdc098be484f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ae1e3922-80bc-43ef-9cf4-4d45eaf9669a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ae1e3922-80bc-43ef-9cf4-4d45eaf9669a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:41.084 /home/vagrant/spdk_repo/spdk 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
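A condensed sketch of what the bdev_fio_rw_verify helpers above actually do: build a per-bdev fio job file, detect which ASAN runtime the fio plugin links against so it can be preloaded ahead of the plugin, and launch fio with the spdk_bdev ioengine. Paths, bdev names, the ldd|grep|awk pipeline, and the fio arguments are taken from the log lines above; the verify-workload globals written by fio_config_gen are elided.

#!/usr/bin/env bash
# Sketch of the fio job-file generation and launch seen in the run above.
spdk=/home/vagrant/spdk_repo/spdk
cfg=$spdk/test/bdev/bdev.fio
plugin=$spdk/build/fio/spdk_bdev

: > "$cfg"  # fio_config_gen starts the file with the verify-workload globals (elided here)
echo serialize_overlap=1 >> "$cfg"  # added when fio reports a 3.x version, as above

# One [job_*] section per xNVMe bdev (names as reported in the log).
for b in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
    printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$cfg"
done

# The plugin is an ASAN build, so the matching libasan must come first in
# LD_PRELOAD (same ldd | grep libasan | awk '{print $3}' pipeline as above).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$cfg" \
    --verify_state_save=0 --spdk_json_conf=$spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=$spdk/../output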
00:15:41.084 00:15:41.084 real 0m12.065s 00:15:41.084 user 0m27.697s 00:15:41.084 sys 0m19.791s 00:15:41.084 ************************************ 00:15:41.084 END TEST bdev_fio 00:15:41.084 ************************************ 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.084 03:03:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:41.084 03:03:34 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:41.084 03:03:34 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:41.084 03:03:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:41.084 03:03:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.084 03:03:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.084 ************************************ 00:15:41.084 START TEST bdev_verify 00:15:41.084 ************************************ 00:15:41.084 03:03:34 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:41.084 [2024-12-10 03:03:34.939228] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:15:41.084 [2024-12-10 03:03:34.939391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72934 ] 00:15:41.084 [2024-12-10 03:03:35.103257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:41.084 [2024-12-10 03:03:35.225105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.084 [2024-12-10 03:03:35.225195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.360 Running I/O for 5 seconds... 
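For reference, the bdevperf invocation driving this verify pass, with the flags from the command line above annotated from their documented meanings (the trailing '' in the run_test line is an empty extra-arguments placeholder from the test wrapper):

# bdevperf verify pass, flags as in the run above:
#   --json   bdev configuration to load
#   -q 128   queue depth per job
#   -o 4096  I/O size in bytes (4 KiB)
#   -w verify  write-then-read-back workload
#   -t 5     run time in seconds
#   -m 0x3   core mask: two reactors, matching the two "Reactor started" lines
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3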
00:15:43.674 22464.00 IOPS, 87.75 MiB/s [2024-12-10T03:03:38.998Z] 23456.00 IOPS, 91.62 MiB/s [2024-12-10T03:03:39.933Z] 23434.67 IOPS, 91.54 MiB/s [2024-12-10T03:03:40.867Z] 22912.00 IOPS, 89.50 MiB/s [2024-12-10T03:03:40.867Z] 22982.40 IOPS, 89.77 MiB/s 00:15:46.479 Latency(us) 00:15:46.479 [2024-12-10T03:03:40.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.479 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.479 Verification LBA range: start 0x0 length 0x80000 00:15:46.479 nvme0n1 : 5.01 1788.79 6.99 0.00 0.00 71395.54 6427.57 76223.41 00:15:46.479 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.479 Verification LBA range: start 0x80000 length 0x80000 00:15:46.479 nvme0n1 : 5.02 1759.57 6.87 0.00 0.00 72586.94 12502.25 74206.92 00:15:46.479 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.479 Verification LBA range: start 0x0 length 0x80000 00:15:46.479 nvme0n2 : 5.08 1790.31 6.99 0.00 0.00 71164.67 5747.00 69770.63 00:15:46.479 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.479 Verification LBA range: start 0x80000 length 0x80000 00:15:46.479 nvme0n2 : 5.07 1765.91 6.90 0.00 0.00 72145.61 11695.66 62914.56 00:15:46.479 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.479 Verification LBA range: start 0x0 length 0x80000 00:15:46.479 nvme0n3 : 5.04 1779.35 6.95 0.00 0.00 71428.66 7309.78 78643.20 00:15:46.479 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.479 Verification LBA range: start 0x80000 length 0x80000 00:15:46.479 nvme0n3 : 5.03 1754.45 6.85 0.00 0.00 72443.94 9427.10 69367.34 00:15:46.479 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.479 Verification LBA range: start 0x0 length 0x20000 00:15:46.479 nvme1n1 : 5.06 1794.66 7.01 0.00 0.00 70649.98 5444.53 81062.99 00:15:46.479 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.479 Verification LBA range: start 0x20000 length 0x20000 00:15:46.479 nvme1n1 : 5.06 1769.29 6.91 0.00 0.00 71665.08 7864.32 77433.30 00:15:46.479 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.479 Verification LBA range: start 0x0 length 0xbd0bd 00:15:46.479 nvme2n1 : 5.07 2441.48 9.54 0.00 0.00 51787.25 6049.48 68560.74 00:15:46.479 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.479 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:46.479 nvme2n1 : 5.08 2469.65 9.65 0.00 0.00 51158.39 3428.04 74610.22 00:15:46.479 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.479 Verification LBA range: start 0x0 length 0xa0000 00:15:46.479 nvme3n1 : 5.08 1814.58 7.09 0.00 0.00 69543.96 4789.17 81466.29 00:15:46.480 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.480 Verification LBA range: start 0xa0000 length 0xa0000 00:15:46.480 nvme3n1 : 5.08 1788.11 6.98 0.00 0.00 70571.34 6906.49 76626.71 00:15:46.480 [2024-12-10T03:03:40.868Z] =================================================================================================================== 00:15:46.480 [2024-12-10T03:03:40.868Z] Total : 22716.14 88.73 0.00 0.00 67036.57 3428.04 81466.29 00:15:47.414 00:15:47.414 real 0m6.651s 00:15:47.414 user 0m10.923s 00:15:47.414 sys 0m1.371s 00:15:47.414 03:03:41 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.414 03:03:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:47.414 ************************************ 00:15:47.414 END TEST bdev_verify 00:15:47.414 ************************************ 00:15:47.414 03:03:41 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:47.414 03:03:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:47.414 03:03:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.414 03:03:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.414 ************************************ 00:15:47.414 START TEST bdev_verify_big_io 00:15:47.414 ************************************ 00:15:47.414 03:03:41 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:47.414 [2024-12-10 03:03:41.623458] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:15:47.414 [2024-12-10 03:03:41.623570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73036 ] 00:15:47.414 [2024-12-10 03:03:41.782954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:47.672 [2024-12-10 03:03:41.878874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.672 [2024-12-10 03:03:41.878952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.239 Running I/O for 5 seconds... 
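This big-I/O pass reuses the same bdevperf harness; only the transfer size changes from 4 KiB to 64 KiB (-o 65536), which is why throughput rather than IOPS dominates the results below:

# Identical harness, 64 KiB I/Os instead of 4 KiB:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3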
00:15:54.051 1034.00 IOPS, 64.62 MiB/s [2024-12-10T03:03:48.698Z] 2688.50 IOPS, 168.03 MiB/s 00:15:54.310 Latency(us) 00:15:54.310 [2024-12-10T03:03:48.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.310 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.310 Verification LBA range: start 0x0 length 0x8000 00:15:54.310 nvme0n1 : 5.95 83.85 5.24 0.00 0.00 1408366.72 10737.82 1922927.06 00:15:54.310 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.310 Verification LBA range: start 0x8000 length 0x8000 00:15:54.310 nvme0n1 : 5.76 99.97 6.25 0.00 0.00 1203002.90 191970.07 1264743.98 00:15:54.310 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.310 Verification LBA range: start 0x0 length 0x8000 00:15:54.310 nvme0n2 : 6.03 92.80 5.80 0.00 0.00 1274726.15 36700.16 1600288.30 00:15:54.310 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.310 Verification LBA range: start 0x8000 length 0x8000 00:15:54.310 nvme0n2 : 6.06 114.90 7.18 0.00 0.00 1029236.22 185517.29 1058255.16 00:15:54.310 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.310 Verification LBA range: start 0x0 length 0x8000 00:15:54.310 nvme0n3 : 6.11 123.12 7.70 0.00 0.00 937086.23 62511.26 1187310.67 00:15:54.310 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.310 Verification LBA range: start 0x8000 length 0x8000 00:15:54.310 nvme0n3 : 6.06 120.14 7.51 0.00 0.00 953595.33 133895.09 1103424.59 00:15:54.310 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.310 Verification LBA range: start 0x0 length 0x2000 00:15:54.310 nvme1n1 : 6.11 102.13 6.38 0.00 0.00 1081053.16 73400.32 1316366.18 00:15:54.310 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.310 Verification LBA range: start 0x2000 length 0x2000 00:15:54.310 nvme1n1 : 6.11 87.66 5.48 0.00 0.00 1267440.66 126635.72 2503676.85 00:15:54.310 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.310 Verification LBA range: start 0x0 length 0xbd0b 00:15:54.310 nvme2n1 : 6.13 147.14 9.20 0.00 0.00 724441.77 7864.32 1445421.69 00:15:54.310 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.310 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:54.310 nvme2n1 : 6.12 157.98 9.87 0.00 0.00 680658.02 5520.15 1051802.39 00:15:54.310 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.310 Verification LBA range: start 0x0 length 0xa000 00:15:54.310 nvme3n1 : 6.14 130.38 8.15 0.00 0.00 791834.07 1323.32 1793871.56 00:15:54.311 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.311 Verification LBA range: start 0xa000 length 0xa000 00:15:54.311 nvme3n1 : 6.13 126.68 7.92 0.00 0.00 818520.72 1651.00 1329271.73 00:15:54.311 [2024-12-10T03:03:48.699Z] =================================================================================================================== 00:15:54.311 [2024-12-10T03:03:48.699Z] Total : 1386.76 86.67 0.00 0.00 969464.53 1323.32 2503676.85 00:15:55.247 00:15:55.247 real 0m7.816s 00:15:55.247 user 0m14.531s 00:15:55.247 sys 0m0.344s 00:15:55.247 03:03:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.247 ************************************ 00:15:55.247 END TEST bdev_verify_big_io 
00:15:55.247 ************************************ 00:15:55.247 03:03:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.247 03:03:49 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:55.247 03:03:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:55.247 03:03:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.247 03:03:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:55.247 ************************************ 00:15:55.247 START TEST bdev_write_zeroes 00:15:55.247 ************************************ 00:15:55.247 03:03:49 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:55.247 [2024-12-10 03:03:49.492196] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:15:55.247 [2024-12-10 03:03:49.492303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73146 ] 00:15:55.505 [2024-12-10 03:03:49.654011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.505 [2024-12-10 03:03:49.747894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.771 Running I/O for 1 seconds... 00:15:57.162 79615.00 IOPS, 311.00 MiB/s 00:15:57.162 Latency(us) 00:15:57.162 [2024-12-10T03:03:51.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.162 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:57.162 nvme0n1 : 1.02 12947.82 50.58 0.00 0.00 9876.18 5847.83 20769.87 00:15:57.162 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:57.162 nvme0n2 : 1.03 12813.42 50.05 0.00 0.00 9971.99 5620.97 22181.42 00:15:57.162 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:57.162 nvme0n3 : 1.03 12798.67 49.99 0.00 0.00 9976.58 5671.38 22181.42 00:15:57.162 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:57.162 nvme1n1 : 1.02 12904.00 50.41 0.00 0.00 9884.18 4713.55 23088.84 00:15:57.162 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:57.162 nvme2n1 : 1.03 14353.80 56.07 0.00 0.00 8878.64 3503.66 21576.47 00:15:57.162 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:57.162 nvme3n1 : 1.02 12962.92 50.64 0.00 0.00 9827.89 5444.53 20467.40 00:15:57.162 [2024-12-10T03:03:51.550Z] =================================================================================================================== 00:15:57.162 [2024-12-10T03:03:51.550Z] Total : 78780.62 307.74 0.00 0.00 9719.59 3503.66 23088.84 00:15:57.729 00:15:57.729 real 0m2.454s 00:15:57.729 user 0m1.820s 00:15:57.729 sys 0m0.451s 00:15:57.729 03:03:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.729 03:03:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:57.729 ************************************ 00:15:57.729 END TEST 
bdev_write_zeroes 00:15:57.729 ************************************ 00:15:57.729 03:03:51 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:57.729 03:03:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:57.729 03:03:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.729 03:03:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.729 ************************************ 00:15:57.729 START TEST bdev_json_nonenclosed 00:15:57.729 ************************************ 00:15:57.729 03:03:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:57.729 [2024-12-10 03:03:52.012096] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:15:57.729 [2024-12-10 03:03:52.012205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73195 ] 00:15:57.987 [2024-12-10 03:03:52.170710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.987 [2024-12-10 03:03:52.265499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.987 [2024-12-10 03:03:52.265567] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:57.987 [2024-12-10 03:03:52.265584] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:57.987 [2024-12-10 03:03:52.265592] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:58.246 00:15:58.246 real 0m0.490s 00:15:58.246 user 0m0.293s 00:15:58.246 sys 0m0.094s 00:15:58.246 ************************************ 00:15:58.246 END TEST bdev_json_nonenclosed 00:15:58.246 ************************************ 00:15:58.246 03:03:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.246 03:03:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:58.246 03:03:52 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:58.246 03:03:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:58.246 03:03:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.246 03:03:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.246 ************************************ 00:15:58.246 START TEST bdev_json_nonarray 00:15:58.246 ************************************ 00:15:58.246 03:03:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:58.246 [2024-12-10 03:03:52.564286] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
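Both JSON negative tests (bdev_json_nonenclosed above and bdev_json_nonarray starting here) follow the same pattern: feed bdevperf a deliberately malformed configuration and require a non-zero exit, matching the json_config.c error and "spdk_app_stop'd on non-zero" lines in the log. A minimal sketch; the file body is an assumption standing in for what nonenclosed.json exercises:

#!/usr/bin/env bash
# Negative test sketch: a config that is not enclosed in {} must be rejected.
bad=$(mktemp)
echo '"subsystems": []' > "$bad"   # valid JSON fragment, but not an object

if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
       --json "$bad" -q 128 -o 4096 -w write_zeroes -t 1; then
    echo "ERROR: malformed config was accepted" >&2
    exit 1
fi
echo "config correctly rejected"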
00:15:58.246 [2024-12-10 03:03:52.564415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73220 ] 00:15:58.503 [2024-12-10 03:03:52.726219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.503 [2024-12-10 03:03:52.820228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.503 [2024-12-10 03:03:52.820308] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:15:58.503 [2024-12-10 03:03:52.820325] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:58.503 [2024-12-10 03:03:52.820334] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:58.762 00:15:58.762 real 0m0.496s 00:15:58.762 user 0m0.296s 00:15:58.762 sys 0m0.095s 00:15:58.762 03:03:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.762 ************************************ 00:15:58.762 END TEST bdev_json_nonarray 00:15:58.762 ************************************ 00:15:58.762 03:03:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:58.762 03:03:53 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:15:58.762 03:03:53 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:15:58.762 03:03:53 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:15:58.762 03:03:53 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:58.762 03:03:53 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:15:58.762 03:03:53 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:58.762 03:03:53 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:58.762 03:03:53 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:58.762 03:03:53 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:58.762 03:03:53 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:58.762 03:03:53 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:58.762 03:03:53 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:59.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:17.487 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:17.487 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:17.487 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:17.745 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:17.746 00:16:17.746 real 1m7.945s 00:16:17.746 user 1m22.258s 00:16:17.746 sys 0m53.196s 00:16:17.746 03:04:11 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.746 03:04:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:17.746 ************************************ 00:16:17.746 END TEST blockdev_xnvme 00:16:17.746 ************************************ 00:16:17.746 03:04:12 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:17.746 03:04:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:17.746 03:04:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.746 03:04:12 -- 
common/autotest_common.sh@10 -- # set +x 00:16:17.746 ************************************ 00:16:17.746 START TEST ublk 00:16:17.746 ************************************ 00:16:17.746 03:04:12 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:17.746 * Looking for test storage... 00:16:17.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:17.746 03:04:12 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:17.746 03:04:12 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:17.746 03:04:12 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:18.004 03:04:12 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:18.004 03:04:12 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.004 03:04:12 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.004 03:04:12 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.004 03:04:12 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.004 03:04:12 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.004 03:04:12 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.004 03:04:12 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.004 03:04:12 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.004 03:04:12 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.004 03:04:12 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.004 03:04:12 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.004 03:04:12 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:18.004 03:04:12 ublk -- scripts/common.sh@345 -- # : 1 00:16:18.004 03:04:12 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.004 03:04:12 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:18.004 03:04:12 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:18.004 03:04:12 ublk -- scripts/common.sh@353 -- # local d=1 00:16:18.004 03:04:12 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.004 03:04:12 ublk -- scripts/common.sh@355 -- # echo 1 00:16:18.004 03:04:12 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.004 03:04:12 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:18.004 03:04:12 ublk -- scripts/common.sh@353 -- # local d=2 00:16:18.004 03:04:12 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.004 03:04:12 ublk -- scripts/common.sh@355 -- # echo 2 00:16:18.004 03:04:12 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.004 03:04:12 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.004 03:04:12 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.004 03:04:12 ublk -- scripts/common.sh@368 -- # return 0 00:16:18.004 03:04:12 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.004 03:04:12 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:18.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.004 --rc genhtml_branch_coverage=1 00:16:18.004 --rc genhtml_function_coverage=1 00:16:18.004 --rc genhtml_legend=1 00:16:18.004 --rc geninfo_all_blocks=1 00:16:18.004 --rc geninfo_unexecuted_blocks=1 00:16:18.004 00:16:18.004 ' 00:16:18.004 03:04:12 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:18.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.004 --rc genhtml_branch_coverage=1 00:16:18.004 --rc genhtml_function_coverage=1 00:16:18.004 --rc genhtml_legend=1 00:16:18.004 --rc geninfo_all_blocks=1 00:16:18.004 --rc geninfo_unexecuted_blocks=1 00:16:18.004 00:16:18.004 ' 00:16:18.004 03:04:12 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:18.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.005 --rc genhtml_branch_coverage=1 00:16:18.005 --rc genhtml_function_coverage=1 00:16:18.005 --rc genhtml_legend=1 00:16:18.005 --rc geninfo_all_blocks=1 00:16:18.005 --rc geninfo_unexecuted_blocks=1 00:16:18.005 00:16:18.005 ' 00:16:18.005 03:04:12 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:18.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.005 --rc genhtml_branch_coverage=1 00:16:18.005 --rc genhtml_function_coverage=1 00:16:18.005 --rc genhtml_legend=1 00:16:18.005 --rc geninfo_all_blocks=1 00:16:18.005 --rc geninfo_unexecuted_blocks=1 00:16:18.005 00:16:18.005 ' 00:16:18.005 03:04:12 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:18.005 03:04:12 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:18.005 03:04:12 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:18.005 03:04:12 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:18.005 03:04:12 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:18.005 03:04:12 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:18.005 03:04:12 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:18.005 03:04:12 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:18.005 03:04:12 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:18.005 03:04:12 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:18.005 03:04:12 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:18.005 03:04:12 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:18.005 03:04:12 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:18.005 03:04:12 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:18.005 03:04:12 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:18.005 03:04:12 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:18.005 03:04:12 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:18.005 03:04:12 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:18.005 03:04:12 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:18.005 03:04:12 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:18.005 03:04:12 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:18.005 03:04:12 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.005 03:04:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.005 ************************************ 00:16:18.005 START TEST test_save_ublk_config 00:16:18.005 ************************************ 00:16:18.005 03:04:12 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:18.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.005 03:04:12 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:18.005 03:04:12 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73527 00:16:18.005 03:04:12 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:18.005 03:04:12 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73527 00:16:18.005 03:04:12 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:18.005 03:04:12 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73527 ']' 00:16:18.005 03:04:12 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.005 03:04:12 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.005 03:04:12 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.005 03:04:12 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.005 03:04:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:18.005 [2024-12-10 03:04:12.256130] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
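Once this spdk_tgt is up, the test drives it over the RPC socket: create a malloc bdev, create the ublk target, expose the bdev as a ublk disk, then snapshot the configuration. A sketch using scripts/rpc.py; the parameters mirror the saved config that follows (8192 blocks x 4096 B = 32 MiB, cpumask "1", one queue of depth 128), but the long-flag spellings for the ublk RPCs are assumptions:

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 32 MiB malloc bdev: 8192 blocks * 4096 B, as in the saved config below.
$rpc bdev_malloc_create -b malloc0 32 4096

# Create the ublk target pinned to core 0, then expose malloc0 as ublk id 0
# with one queue of depth 128 (flag spellings assumed).
$rpc ublk_create_target --cpumask 1
$rpc ublk_start_disk malloc0 0 --num-queues 1 --queue-depth 128

# Snapshot the whole runtime configuration as JSON.
$rpc save_config > /tmp/ublk_config.json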
00:16:18.005 [2024-12-10 03:04:12.256244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73527 ] 00:16:18.263 [2024-12-10 03:04:12.415252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.263 [2024-12-10 03:04:12.508084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.829 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.829 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:18.829 03:04:13 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:18.829 03:04:13 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:18.829 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.829 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:18.829 [2024-12-10 03:04:13.118399] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:18.829 [2024-12-10 03:04:13.119193] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:18.829 malloc0 00:16:18.829 [2024-12-10 03:04:13.174670] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:18.829 [2024-12-10 03:04:13.174745] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:18.829 [2024-12-10 03:04:13.174755] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:18.829 [2024-12-10 03:04:13.174761] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.829 [2024-12-10 03:04:13.183467] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.829 [2024-12-10 03:04:13.183489] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.829 [2024-12-10 03:04:13.187230] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.829 [2024-12-10 03:04:13.187324] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:18.829 [2024-12-10 03:04:13.199497] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.829 0 00:16:18.829 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.829 03:04:13 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:18.829 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.829 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:19.396 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.396 03:04:13 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:19.396 "subsystems": [ 00:16:19.396 { 00:16:19.396 "subsystem": "fsdev", 00:16:19.396 "config": [ 00:16:19.396 { 00:16:19.396 "method": "fsdev_set_opts", 00:16:19.396 "params": { 00:16:19.396 "fsdev_io_pool_size": 65535, 00:16:19.396 "fsdev_io_cache_size": 256 00:16:19.396 } 00:16:19.396 } 00:16:19.396 ] 00:16:19.396 }, 00:16:19.396 { 00:16:19.396 "subsystem": "keyring", 00:16:19.396 "config": [] 00:16:19.396 }, 00:16:19.396 { 00:16:19.396 "subsystem": "iobuf", 00:16:19.396 "config": [ 00:16:19.396 { 
00:16:19.396 "method": "iobuf_set_options", 00:16:19.396 "params": { 00:16:19.396 "small_pool_count": 8192, 00:16:19.396 "large_pool_count": 1024, 00:16:19.396 "small_bufsize": 8192, 00:16:19.396 "large_bufsize": 135168, 00:16:19.396 "enable_numa": false 00:16:19.396 } 00:16:19.396 } 00:16:19.396 ] 00:16:19.396 }, 00:16:19.396 { 00:16:19.396 "subsystem": "sock", 00:16:19.396 "config": [ 00:16:19.396 { 00:16:19.396 "method": "sock_set_default_impl", 00:16:19.396 "params": { 00:16:19.396 "impl_name": "posix" 00:16:19.396 } 00:16:19.396 }, 00:16:19.396 { 00:16:19.396 "method": "sock_impl_set_options", 00:16:19.396 "params": { 00:16:19.396 "impl_name": "ssl", 00:16:19.396 "recv_buf_size": 4096, 00:16:19.396 "send_buf_size": 4096, 00:16:19.396 "enable_recv_pipe": true, 00:16:19.396 "enable_quickack": false, 00:16:19.396 "enable_placement_id": 0, 00:16:19.396 "enable_zerocopy_send_server": true, 00:16:19.396 "enable_zerocopy_send_client": false, 00:16:19.396 "zerocopy_threshold": 0, 00:16:19.396 "tls_version": 0, 00:16:19.396 "enable_ktls": false 00:16:19.396 } 00:16:19.396 }, 00:16:19.396 { 00:16:19.396 "method": "sock_impl_set_options", 00:16:19.396 "params": { 00:16:19.396 "impl_name": "posix", 00:16:19.396 "recv_buf_size": 2097152, 00:16:19.396 "send_buf_size": 2097152, 00:16:19.396 "enable_recv_pipe": true, 00:16:19.396 "enable_quickack": false, 00:16:19.396 "enable_placement_id": 0, 00:16:19.396 "enable_zerocopy_send_server": true, 00:16:19.396 "enable_zerocopy_send_client": false, 00:16:19.396 "zerocopy_threshold": 0, 00:16:19.396 "tls_version": 0, 00:16:19.396 "enable_ktls": false 00:16:19.396 } 00:16:19.396 } 00:16:19.396 ] 00:16:19.396 }, 00:16:19.396 { 00:16:19.396 "subsystem": "vmd", 00:16:19.396 "config": [] 00:16:19.396 }, 00:16:19.396 { 00:16:19.396 "subsystem": "accel", 00:16:19.396 "config": [ 00:16:19.396 { 00:16:19.396 "method": "accel_set_options", 00:16:19.396 "params": { 00:16:19.396 "small_cache_size": 128, 00:16:19.396 "large_cache_size": 16, 00:16:19.396 "task_count": 2048, 00:16:19.396 "sequence_count": 2048, 00:16:19.396 "buf_count": 2048 00:16:19.396 } 00:16:19.396 } 00:16:19.396 ] 00:16:19.396 }, 00:16:19.396 { 00:16:19.396 "subsystem": "bdev", 00:16:19.396 "config": [ 00:16:19.396 { 00:16:19.396 "method": "bdev_set_options", 00:16:19.396 "params": { 00:16:19.396 "bdev_io_pool_size": 65535, 00:16:19.396 "bdev_io_cache_size": 256, 00:16:19.396 "bdev_auto_examine": true, 00:16:19.396 "iobuf_small_cache_size": 128, 00:16:19.396 "iobuf_large_cache_size": 16 00:16:19.396 } 00:16:19.396 }, 00:16:19.396 { 00:16:19.396 "method": "bdev_raid_set_options", 00:16:19.396 "params": { 00:16:19.396 "process_window_size_kb": 1024, 00:16:19.396 "process_max_bandwidth_mb_sec": 0 00:16:19.396 } 00:16:19.396 }, 00:16:19.396 { 00:16:19.396 "method": "bdev_iscsi_set_options", 00:16:19.396 "params": { 00:16:19.396 "timeout_sec": 30 00:16:19.396 } 00:16:19.396 }, 00:16:19.396 { 00:16:19.396 "method": "bdev_nvme_set_options", 00:16:19.396 "params": { 00:16:19.396 "action_on_timeout": "none", 00:16:19.396 "timeout_us": 0, 00:16:19.396 "timeout_admin_us": 0, 00:16:19.396 "keep_alive_timeout_ms": 10000, 00:16:19.396 "arbitration_burst": 0, 00:16:19.396 "low_priority_weight": 0, 00:16:19.396 "medium_priority_weight": 0, 00:16:19.396 "high_priority_weight": 0, 00:16:19.396 "nvme_adminq_poll_period_us": 10000, 00:16:19.396 "nvme_ioq_poll_period_us": 0, 00:16:19.396 "io_queue_requests": 0, 00:16:19.396 "delay_cmd_submit": true, 00:16:19.397 "transport_retry_count": 4, 00:16:19.397 
"bdev_retry_count": 3, 00:16:19.397 "transport_ack_timeout": 0, 00:16:19.397 "ctrlr_loss_timeout_sec": 0, 00:16:19.397 "reconnect_delay_sec": 0, 00:16:19.397 "fast_io_fail_timeout_sec": 0, 00:16:19.397 "disable_auto_failback": false, 00:16:19.397 "generate_uuids": false, 00:16:19.397 "transport_tos": 0, 00:16:19.397 "nvme_error_stat": false, 00:16:19.397 "rdma_srq_size": 0, 00:16:19.397 "io_path_stat": false, 00:16:19.397 "allow_accel_sequence": false, 00:16:19.397 "rdma_max_cq_size": 0, 00:16:19.397 "rdma_cm_event_timeout_ms": 0, 00:16:19.397 "dhchap_digests": [ 00:16:19.397 "sha256", 00:16:19.397 "sha384", 00:16:19.397 "sha512" 00:16:19.397 ], 00:16:19.397 "dhchap_dhgroups": [ 00:16:19.397 "null", 00:16:19.397 "ffdhe2048", 00:16:19.397 "ffdhe3072", 00:16:19.397 "ffdhe4096", 00:16:19.397 "ffdhe6144", 00:16:19.397 "ffdhe8192" 00:16:19.397 ] 00:16:19.397 } 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "method": "bdev_nvme_set_hotplug", 00:16:19.397 "params": { 00:16:19.397 "period_us": 100000, 00:16:19.397 "enable": false 00:16:19.397 } 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "method": "bdev_malloc_create", 00:16:19.397 "params": { 00:16:19.397 "name": "malloc0", 00:16:19.397 "num_blocks": 8192, 00:16:19.397 "block_size": 4096, 00:16:19.397 "physical_block_size": 4096, 00:16:19.397 "uuid": "bc3ae5c5-c18e-4c84-b34a-7cb0f69d04e8", 00:16:19.397 "optimal_io_boundary": 0, 00:16:19.397 "md_size": 0, 00:16:19.397 "dif_type": 0, 00:16:19.397 "dif_is_head_of_md": false, 00:16:19.397 "dif_pi_format": 0 00:16:19.397 } 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "method": "bdev_wait_for_examine" 00:16:19.397 } 00:16:19.397 ] 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "subsystem": "scsi", 00:16:19.397 "config": null 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "subsystem": "scheduler", 00:16:19.397 "config": [ 00:16:19.397 { 00:16:19.397 "method": "framework_set_scheduler", 00:16:19.397 "params": { 00:16:19.397 "name": "static" 00:16:19.397 } 00:16:19.397 } 00:16:19.397 ] 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "subsystem": "vhost_scsi", 00:16:19.397 "config": [] 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "subsystem": "vhost_blk", 00:16:19.397 "config": [] 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "subsystem": "ublk", 00:16:19.397 "config": [ 00:16:19.397 { 00:16:19.397 "method": "ublk_create_target", 00:16:19.397 "params": { 00:16:19.397 "cpumask": "1" 00:16:19.397 } 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "method": "ublk_start_disk", 00:16:19.397 "params": { 00:16:19.397 "bdev_name": "malloc0", 00:16:19.397 "ublk_id": 0, 00:16:19.397 "num_queues": 1, 00:16:19.397 "queue_depth": 128 00:16:19.397 } 00:16:19.397 } 00:16:19.397 ] 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "subsystem": "nbd", 00:16:19.397 "config": [] 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "subsystem": "nvmf", 00:16:19.397 "config": [ 00:16:19.397 { 00:16:19.397 "method": "nvmf_set_config", 00:16:19.397 "params": { 00:16:19.397 "discovery_filter": "match_any", 00:16:19.397 "admin_cmd_passthru": { 00:16:19.397 "identify_ctrlr": false 00:16:19.397 }, 00:16:19.397 "dhchap_digests": [ 00:16:19.397 "sha256", 00:16:19.397 "sha384", 00:16:19.397 "sha512" 00:16:19.397 ], 00:16:19.397 "dhchap_dhgroups": [ 00:16:19.397 "null", 00:16:19.397 "ffdhe2048", 00:16:19.397 "ffdhe3072", 00:16:19.397 "ffdhe4096", 00:16:19.397 "ffdhe6144", 00:16:19.397 "ffdhe8192" 00:16:19.397 ] 00:16:19.397 } 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "method": "nvmf_set_max_subsystems", 00:16:19.397 "params": { 00:16:19.397 "max_subsystems": 1024 
00:16:19.397 } 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "method": "nvmf_set_crdt", 00:16:19.397 "params": { 00:16:19.397 "crdt1": 0, 00:16:19.397 "crdt2": 0, 00:16:19.397 "crdt3": 0 00:16:19.397 } 00:16:19.397 } 00:16:19.397 ] 00:16:19.397 }, 00:16:19.397 { 00:16:19.397 "subsystem": "iscsi", 00:16:19.397 "config": [ 00:16:19.397 { 00:16:19.397 "method": "iscsi_set_options", 00:16:19.397 "params": { 00:16:19.397 "node_base": "iqn.2016-06.io.spdk", 00:16:19.397 "max_sessions": 128, 00:16:19.397 "max_connections_per_session": 2, 00:16:19.397 "max_queue_depth": 64, 00:16:19.397 "default_time2wait": 2, 00:16:19.397 "default_time2retain": 20, 00:16:19.397 "first_burst_length": 8192, 00:16:19.397 "immediate_data": true, 00:16:19.397 "allow_duplicated_isid": false, 00:16:19.397 "error_recovery_level": 0, 00:16:19.397 "nop_timeout": 60, 00:16:19.397 "nop_in_interval": 30, 00:16:19.397 "disable_chap": false, 00:16:19.397 "require_chap": false, 00:16:19.397 "mutual_chap": false, 00:16:19.397 "chap_group": 0, 00:16:19.397 "max_large_datain_per_connection": 64, 00:16:19.397 "max_r2t_per_connection": 4, 00:16:19.397 "pdu_pool_size": 36864, 00:16:19.397 "immediate_data_pool_size": 16384, 00:16:19.397 "data_out_pool_size": 2048 00:16:19.397 } 00:16:19.397 } 00:16:19.397 ] 00:16:19.397 } 00:16:19.397 ] 00:16:19.397 }' 00:16:19.397 03:04:13 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73527 00:16:19.397 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73527 ']' 00:16:19.397 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73527 00:16:19.397 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:19.397 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.397 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73527 00:16:19.397 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.397 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.397 killing process with pid 73527 00:16:19.397 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73527' 00:16:19.397 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73527 00:16:19.397 03:04:13 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73527 00:16:20.332 [2024-12-10 03:04:14.543677] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:20.332 [2024-12-10 03:04:14.576474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:20.332 [2024-12-10 03:04:14.576591] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:20.332 [2024-12-10 03:04:14.586405] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:20.332 [2024-12-10 03:04:14.586452] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:20.332 [2024-12-10 03:04:14.586464] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:20.332 [2024-12-10 03:04:14.586488] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:20.332 [2024-12-10 03:04:14.586628] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:21.706 03:04:15 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 
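The restart below feeds the captured JSON back to a fresh target through a process substitution, which is why the target sees it as -c /dev/fd/63. A sketch of the round-trip; $config is the JSON captured by save_config, and the readiness/inspection RPCs are stand-ins (assumed) for the harness's own waitforlisten helper:

#!/usr/bin/env bash
spdk=/home/vagrant/spdk_repo/spdk

# <(...) appears to the child as /dev/fd/63, matching the command line above.
$spdk/build/bin/spdk_tgt -L ublk -c <(echo "$config") &
tgt_pid=$!

# Wait for the RPC socket, then check that the ublk disk was restored.
$spdk/scripts/rpc.py -t 30 rpc_get_methods > /dev/null
$spdk/scripts/rpc.py ublk_get_disks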
00:16:21.706 03:04:15 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73576 00:16:21.706 03:04:15 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73576 00:16:21.706 03:04:15 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73576 ']' 00:16:21.706 03:04:15 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.706 03:04:15 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.706 03:04:15 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:21.706 "subsystems": [ 00:16:21.706 { 00:16:21.706 "subsystem": "fsdev", 00:16:21.706 "config": [ 00:16:21.706 { 00:16:21.706 "method": "fsdev_set_opts", 00:16:21.706 "params": { 00:16:21.706 "fsdev_io_pool_size": 65535, 00:16:21.706 "fsdev_io_cache_size": 256 00:16:21.706 } 00:16:21.706 } 00:16:21.706 ] 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "subsystem": "keyring", 00:16:21.706 "config": [] 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "subsystem": "iobuf", 00:16:21.706 "config": [ 00:16:21.706 { 00:16:21.706 "method": "iobuf_set_options", 00:16:21.706 "params": { 00:16:21.706 "small_pool_count": 8192, 00:16:21.706 "large_pool_count": 1024, 00:16:21.706 "small_bufsize": 8192, 00:16:21.706 "large_bufsize": 135168, 00:16:21.706 "enable_numa": false 00:16:21.706 } 00:16:21.706 } 00:16:21.706 ] 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "subsystem": "sock", 00:16:21.706 "config": [ 00:16:21.706 { 00:16:21.706 "method": "sock_set_default_impl", 00:16:21.706 "params": { 00:16:21.706 "impl_name": "posix" 00:16:21.706 } 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "method": "sock_impl_set_options", 00:16:21.706 "params": { 00:16:21.706 "impl_name": "ssl", 00:16:21.706 "recv_buf_size": 4096, 00:16:21.706 "send_buf_size": 4096, 00:16:21.706 "enable_recv_pipe": true, 00:16:21.706 "enable_quickack": false, 00:16:21.706 "enable_placement_id": 0, 00:16:21.706 "enable_zerocopy_send_server": true, 00:16:21.706 "enable_zerocopy_send_client": false, 00:16:21.706 "zerocopy_threshold": 0, 00:16:21.706 "tls_version": 0, 00:16:21.706 "enable_ktls": false 00:16:21.706 } 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "method": "sock_impl_set_options", 00:16:21.706 "params": { 00:16:21.706 "impl_name": "posix", 00:16:21.706 "recv_buf_size": 2097152, 00:16:21.706 "send_buf_size": 2097152, 00:16:21.706 "enable_recv_pipe": true, 00:16:21.706 "enable_quickack": false, 00:16:21.706 "enable_placement_id": 0, 00:16:21.706 "enable_zerocopy_send_server": true, 00:16:21.706 "enable_zerocopy_send_client": false, 00:16:21.706 "zerocopy_threshold": 0, 00:16:21.706 "tls_version": 0, 00:16:21.706 "enable_ktls": false 00:16:21.706 } 00:16:21.706 } 00:16:21.706 ] 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "subsystem": "vmd", 00:16:21.706 "config": [] 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "subsystem": "accel", 00:16:21.706 "config": [ 00:16:21.706 { 00:16:21.706 "method": "accel_set_options", 00:16:21.706 "params": { 00:16:21.706 "small_cache_size": 128, 00:16:21.706 "large_cache_size": 16, 00:16:21.706 "task_count": 2048, 00:16:21.706 "sequence_count": 2048, 00:16:21.706 "buf_count": 2048 00:16:21.706 } 00:16:21.706 } 00:16:21.706 ] 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "subsystem": "bdev", 00:16:21.706 "config": [ 00:16:21.706 { 00:16:21.706 "method": "bdev_set_options", 00:16:21.706 "params": { 00:16:21.706 "bdev_io_pool_size": 65535, 00:16:21.706 "bdev_io_cache_size": 256, 00:16:21.706 "bdev_auto_examine": true, 
00:16:21.706 "iobuf_small_cache_size": 128, 00:16:21.706 "iobuf_large_cache_size": 16 00:16:21.706 } 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "method": "bdev_raid_set_options", 00:16:21.706 "params": { 00:16:21.706 "process_window_size_kb": 1024, 00:16:21.706 "process_max_bandwidth_mb_sec": 0 00:16:21.706 } 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "method": "bdev_iscsi_set_options", 00:16:21.706 "params": { 00:16:21.706 "timeout_sec": 30 00:16:21.706 } 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "method": "bdev_nvme_set_options", 00:16:21.706 "params": { 00:16:21.706 "action_on_timeout": "none", 00:16:21.706 "timeout_us": 0, 00:16:21.706 "timeout_admin_us": 0, 00:16:21.706 "keep_alive_timeout_ms": 10000, 00:16:21.706 "arbitration_burst": 0, 00:16:21.706 "low_priority_weight": 0, 00:16:21.706 "medium_priority_weight": 0, 00:16:21.706 "high_priority_weight": 0, 00:16:21.706 "nvme_adminq_poll_period_us": 10000, 00:16:21.706 "nvme_ioq_poll_period_us": 0, 00:16:21.706 "io_queue_requests": 0, 00:16:21.706 "delay_cmd_submit": true, 00:16:21.706 "transport_retry_count": 4, 00:16:21.706 "bdev_retry_count": 3, 00:16:21.706 "transport_ack_timeout": 0, 00:16:21.706 "ctrlr_loss_timeout_sec": 0, 00:16:21.706 "reconnect_delay_sec": 0, 00:16:21.706 "fast_io_fail_timeout_sec": 0, 00:16:21.706 "disable_auto_failback": false, 00:16:21.706 "generate_uuids": false, 00:16:21.706 "transport_tos": 0, 00:16:21.706 "nvme_error_stat": false, 00:16:21.706 "rdma_srq_size": 0, 00:16:21.706 "io_path_stat": false, 00:16:21.706 "allow_accel_sequence": false, 00:16:21.706 "rdma_max_cq_size": 0, 00:16:21.706 "rdma_cm_event_timeout_ms": 0, 00:16:21.706 "dhchap_digests": [ 00:16:21.706 "sha256", 00:16:21.706 "sha384", 00:16:21.706 "sha512" 00:16:21.706 ], 00:16:21.706 "dhchap_dhgroups": [ 00:16:21.706 "null", 00:16:21.706 "ffdhe2048", 00:16:21.706 "ffdhe3072", 00:16:21.706 "ffdhe4096", 00:16:21.706 "ffdhe6144", 00:16:21.706 "ffdhe8192" 00:16:21.706 ] 00:16:21.706 } 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "method": "bdev_nvme_set_hotplug", 00:16:21.706 "params": { 00:16:21.706 "period_us": 100000, 00:16:21.706 "enable": false 00:16:21.706 } 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "method": "bdev_malloc_create", 00:16:21.706 "params": { 00:16:21.706 "name": "malloc0", 00:16:21.706 "num_blocks": 8192, 00:16:21.706 "block_size": 4096, 00:16:21.706 "physical_block_size": 4096, 00:16:21.706 "uuid": "bc3ae5c5-c18e-4c84-b34a-7cb0f69d04e8", 00:16:21.706 "optimal_io_boundary": 0, 00:16:21.706 "md_size": 0, 00:16:21.706 "dif_type": 0, 00:16:21.706 "dif_is_head_of_md": false, 00:16:21.706 "dif_pi_format": 0 00:16:21.706 } 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "method": "bdev_wait_for_examine" 00:16:21.706 } 00:16:21.706 ] 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "subsystem": "scsi", 00:16:21.706 "config": null 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "subsystem": "scheduler", 00:16:21.706 "config": [ 00:16:21.706 { 00:16:21.706 "method": "framework_set_scheduler", 00:16:21.706 "params": { 00:16:21.706 "name": "static" 00:16:21.706 } 00:16:21.706 } 00:16:21.706 ] 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "subsystem": "vhost_scsi", 00:16:21.706 "config": [] 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "subsystem": "vhost_blk", 00:16:21.706 "config": [] 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "subsystem": "ublk", 00:16:21.706 "config": [ 00:16:21.706 { 00:16:21.706 "method": "ublk_create_target", 00:16:21.706 "params": { 00:16:21.706 "cpumask": "1" 00:16:21.706 } 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 
"method": "ublk_start_disk", 00:16:21.706 "params": { 00:16:21.706 "bdev_name": "malloc0", 00:16:21.706 "ublk_id": 0, 00:16:21.706 "num_queues": 1, 00:16:21.706 "queue_depth": 128 00:16:21.706 } 00:16:21.706 } 00:16:21.706 ] 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "subsystem": "nbd", 00:16:21.706 "config": [] 00:16:21.706 }, 00:16:21.706 { 00:16:21.706 "subsystem": "nvmf", 00:16:21.706 "config": [ 00:16:21.706 { 00:16:21.706 "method": "nvmf_set_config", 00:16:21.706 "params": { 00:16:21.706 "discovery_filter": "match_any", 00:16:21.706 "admin_cmd_passthru": { 00:16:21.706 "identify_ctrlr": false 00:16:21.706 }, 00:16:21.706 "dhchap_digests": [ 00:16:21.706 "sha256", 00:16:21.706 "sha384", 00:16:21.706 "sha512" 00:16:21.706 ], 00:16:21.706 "dhchap_dhgroups": [ 00:16:21.706 "null", 00:16:21.706 "ffdhe2048", 00:16:21.706 "ffdhe3072", 00:16:21.706 "ffdhe4096", 00:16:21.707 "ffdhe6144", 00:16:21.707 "ffdhe8192" 00:16:21.707 ] 00:16:21.707 } 00:16:21.707 }, 00:16:21.707 { 00:16:21.707 "method": "nvmf_set_max_subsystems", 00:16:21.707 "params": { 00:16:21.707 "max_subsystems": 1024 00:16:21.707 } 00:16:21.707 }, 00:16:21.707 { 00:16:21.707 "method": "nvmf_set_crdt", 00:16:21.707 "params": { 00:16:21.707 "crdt1": 0, 00:16:21.707 "crdt2": 0, 00:16:21.707 "crdt3": 0 00:16:21.707 } 00:16:21.707 } 00:16:21.707 ] 00:16:21.707 }, 00:16:21.707 { 00:16:21.707 "subsystem": "iscsi", 00:16:21.707 "config": [ 00:16:21.707 { 00:16:21.707 "method": "iscsi_set_options", 00:16:21.707 "params": { 00:16:21.707 "node_base": "iqn.2016-06.io.spdk", 00:16:21.707 "max_sessions": 128, 00:16:21.707 "max_connections_per_session": 2, 00:16:21.707 "max_queue_depth": 64, 00:16:21.707 "default_time2wait": 2, 00:16:21.707 "default_time2retain": 20, 00:16:21.707 "first_burst_length": 8192, 00:16:21.707 "immediate_data": true, 00:16:21.707 "allow_duplicated_isid": false, 00:16:21.707 "error_recovery_level": 0, 00:16:21.707 "nop_timeout": 60, 00:16:21.707 "nop_in_interval": 30, 00:16:21.707 "disable_chap": false, 00:16:21.707 "require_chap": false, 00:16:21.707 "mutual_chap": false, 00:16:21.707 "chap_group": 0, 00:16:21.707 "max_large_datain_per_connection": 64, 00:16:21.707 "max_r2t_per_connection": 4, 00:16:21.707 "pdu_pool_size": 36864, 00:16:21.707 "immediate_data_pool_size": 16384, 00:16:21.707 "data_out_pool_size": 2048 00:16:21.707 } 00:16:21.707 } 00:16:21.707 ] 00:16:21.707 } 00:16:21.707 ] 00:16:21.707 }' 00:16:21.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.707 03:04:15 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.707 03:04:15 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.707 03:04:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:21.707 [2024-12-10 03:04:15.858514] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:16:21.707 [2024-12-10 03:04:15.858628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73576 ] 00:16:21.707 [2024-12-10 03:04:16.016906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.965 [2024-12-10 03:04:16.109678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.531 [2024-12-10 03:04:16.858395] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:22.531 [2024-12-10 03:04:16.859161] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:22.531 [2024-12-10 03:04:16.866503] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:22.531 [2024-12-10 03:04:16.866570] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:22.531 [2024-12-10 03:04:16.866579] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:22.531 [2024-12-10 03:04:16.866586] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:22.531 [2024-12-10 03:04:16.875452] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:22.531 [2024-12-10 03:04:16.875471] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:22.531 [2024-12-10 03:04:16.882397] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:22.531 [2024-12-10 03:04:16.882483] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:22.531 [2024-12-10 03:04:16.899396] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73576 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73576 ']' 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73576 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73576 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.789 killing process with pid 73576 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73576' 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73576 00:16:22.789 03:04:16 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73576 00:16:24.163 [2024-12-10 03:04:18.125710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:24.163 [2024-12-10 03:04:18.158468] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:24.163 [2024-12-10 03:04:18.158598] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:24.163 [2024-12-10 03:04:18.168401] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:24.163 [2024-12-10 03:04:18.168450] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:24.163 [2024-12-10 03:04:18.168457] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:24.163 [2024-12-10 03:04:18.168480] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:24.163 [2024-12-10 03:04:18.168615] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:25.096 03:04:19 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:25.096 ************************************ 00:16:25.096 END TEST test_save_ublk_config 00:16:25.096 ************************************ 00:16:25.096 00:16:25.096 real 0m7.160s 00:16:25.096 user 0m5.027s 00:16:25.097 sys 0m2.727s 00:16:25.097 03:04:19 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.097 03:04:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:25.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.097 03:04:19 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73650 00:16:25.097 03:04:19 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.097 03:04:19 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73650 00:16:25.097 03:04:19 ublk -- common/autotest_common.sh@835 -- # '[' -z 73650 ']' 00:16:25.097 03:04:19 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.097 03:04:19 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:25.097 03:04:19 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.097 03:04:19 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.097 03:04:19 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.097 03:04:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.097 [2024-12-10 03:04:19.433074] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
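[editor's note] The main ublk suite starts its target with -m 0x3, a hexadecimal coremask with bits 0 and 1 set, which is why the startup below reports two available cores and a reactor on each. For comparison (a sketch, not taken from ublk.sh):

    spdk_tgt -m 0x3 -L ublk    # reactors on cores 0 and 1, as in this run
    spdk_tgt -m 0x1 -L ublk    # would pin a single reactor to core 0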
00:16:25.097 [2024-12-10 03:04:19.433170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73650 ] 00:16:25.355 [2024-12-10 03:04:19.582318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:25.355 [2024-12-10 03:04:19.659756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.355 [2024-12-10 03:04:19.659762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.289 03:04:20 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.289 03:04:20 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:26.289 03:04:20 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:26.289 03:04:20 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:26.289 03:04:20 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.289 03:04:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.289 ************************************ 00:16:26.289 START TEST test_create_ublk 00:16:26.289 ************************************ 00:16:26.289 03:04:20 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:26.289 03:04:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.289 03:04:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.289 [2024-12-10 03:04:20.341398] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:26.289 [2024-12-10 03:04:20.342898] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:26.289 03:04:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:26.289 03:04:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.289 03:04:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.289 03:04:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:26.289 03:04:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.289 03:04:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.289 [2024-12-10 03:04:20.500488] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:26.289 [2024-12-10 03:04:20.500780] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:26.289 [2024-12-10 03:04:20.500794] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:26.289 [2024-12-10 03:04:20.500800] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:26.289 [2024-12-10 03:04:20.509542] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:26.289 [2024-12-10 03:04:20.509558] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:26.289 
[2024-12-10 03:04:20.516403] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:26.289 [2024-12-10 03:04:20.516884] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:26.289 [2024-12-10 03:04:20.529405] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:26.289 03:04:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:26.289 03:04:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.289 03:04:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.289 03:04:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:26.289 { 00:16:26.289 "ublk_device": "/dev/ublkb0", 00:16:26.289 "id": 0, 00:16:26.289 "queue_depth": 512, 00:16:26.289 "num_queues": 4, 00:16:26.289 "bdev_name": "Malloc0" 00:16:26.289 } 00:16:26.289 ]' 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:26.289 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:26.547 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:26.547 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:26.547 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:26.547 03:04:20 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:26.547 03:04:20 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:26.547 03:04:20 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:26.547 03:04:20 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:26.547 03:04:20 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:26.547 03:04:20 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:26.547 03:04:20 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:26.547 03:04:20 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:26.547 03:04:20 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:26.547 03:04:20 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:26.547 03:04:20 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
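[editor's note] run_fio_test has now assembled the full fio command line; re-wrapped here for readability (flags identical to the single line executed below):

    fio --name=fio_test --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 --rw=write --direct=1 \
        --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc \
        --verify_state_save=0

As fio itself warns next, the verification read phase never runs: with --time_based the 10-second budget is consumed entirely by the write phase.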
00:16:26.547 03:04:20 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:26.547 fio: verification read phase will never start because write phase uses all of runtime 00:16:26.547 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:26.547 fio-3.35 00:16:26.547 Starting 1 process 00:16:38.749 00:16:38.749 fio_test: (groupid=0, jobs=1): err= 0: pid=73695: Tue Dec 10 03:04:30 2024 00:16:38.749 write: IOPS=13.3k, BW=52.1MiB/s (54.7MB/s)(521MiB/10001msec); 0 zone resets 00:16:38.749 clat (usec): min=37, max=7984, avg=74.19, stdev=136.32 00:16:38.749 lat (usec): min=38, max=7993, avg=74.62, stdev=136.34 00:16:38.749 clat percentiles (usec): 00:16:38.749 | 1.00th=[ 47], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 63], 00:16:38.749 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 70], 00:16:38.749 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 79], 95.00th=[ 83], 00:16:38.749 | 99.00th=[ 92], 99.50th=[ 103], 99.90th=[ 3032], 99.95th=[ 3490], 00:16:38.749 | 99.99th=[ 3949] 00:16:38.749 bw ( KiB/s): min=34600, max=60336, per=99.86%, avg=53295.16, stdev=5717.03, samples=19 00:16:38.749 iops : min= 8650, max=15084, avg=13323.79, stdev=1429.26, samples=19 00:16:38.749 lat (usec) : 50=2.07%, 100=97.38%, 250=0.25%, 500=0.04%, 750=0.01% 00:16:38.749 lat (usec) : 1000=0.01% 00:16:38.749 lat (msec) : 2=0.06%, 4=0.16%, 10=0.01% 00:16:38.749 cpu : usr=1.81%, sys=12.66%, ctx=133438, majf=0, minf=796 00:16:38.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:38.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.749 issued rwts: total=0,133437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:38.749 00:16:38.749 Run status group 0 (all jobs): 00:16:38.750 WRITE: bw=52.1MiB/s (54.7MB/s), 52.1MiB/s-52.1MiB/s (54.7MB/s-54.7MB/s), io=521MiB (547MB), run=10001-10001msec 00:16:38.750 00:16:38.750 Disk stats (read/write): 00:16:38.750 ublkb0: ios=0/131992, merge=0/0, ticks=0/8357, in_queue=8357, util=99.05% 00:16:38.750 03:04:30 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:38.750 03:04:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.750 03:04:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 [2024-12-10 03:04:30.943610] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:38.750 [2024-12-10 03:04:30.978432] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:38.750 [2024-12-10 03:04:30.979141] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:38.750 [2024-12-10 03:04:30.986406] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:38.750 [2024-12-10 03:04:30.986669] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:38.750 [2024-12-10 03:04:30.986684] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:38.750 03:04:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.750 03:04:30 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:16:38.750 03:04:30 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:38.750 03:04:30 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:38.750 03:04:30 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:38.750 03:04:30 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.750 03:04:30 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:38.750 03:04:30 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.750 03:04:30 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:38.750 03:04:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.750 03:04:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 [2024-12-10 03:04:31.002470] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:38.750 request: 00:16:38.750 { 00:16:38.750 "ublk_id": 0, 00:16:38.750 "method": "ublk_stop_disk", 00:16:38.750 "req_id": 1 00:16:38.750 } 00:16:38.750 Got JSON-RPC error response 00:16:38.750 response: 00:16:38.750 { 00:16:38.750 "code": -19, 00:16:38.750 "message": "No such device" 00:16:38.750 } 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:38.750 03:04:31 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 [2024-12-10 03:04:31.018466] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:38.750 [2024-12-10 03:04:31.026392] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:38.750 [2024-12-10 03:04:31.026428] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.750 03:04:31 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.750 03:04:31 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:38.750 03:04:31 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.750 03:04:31 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:38.750 03:04:31 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:38.750 03:04:31 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:38.750 03:04:31 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.750 03:04:31 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:38.750 03:04:31 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:38.750 ************************************ 00:16:38.750 END TEST test_create_ublk 00:16:38.750 ************************************ 00:16:38.750 03:04:31 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:38.750 00:16:38.750 real 0m11.163s 00:16:38.750 user 0m0.479s 00:16:38.750 sys 0m1.341s 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.750 03:04:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 03:04:31 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:38.750 03:04:31 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:38.750 03:04:31 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.750 03:04:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 ************************************ 00:16:38.750 START TEST test_create_multi_ublk 00:16:38.750 ************************************ 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 [2024-12-10 03:04:31.549387] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:38.750 [2024-12-10 03:04:31.551133] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 [2024-12-10 03:04:31.801515] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:38.750 [2024-12-10 03:04:31.801842] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:38.750 [2024-12-10 03:04:31.801855] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:38.750 [2024-12-10 03:04:31.801864] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:38.750 [2024-12-10 03:04:31.813444] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:38.750 [2024-12-10 03:04:31.813466] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:38.750 [2024-12-10 03:04:31.825401] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:38.750 [2024-12-10 03:04:31.825950] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:38.750 [2024-12-10 03:04:31.841412] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.750 03:04:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.750 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:38.750 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:38.750 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.750 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 [2024-12-10 03:04:32.071505] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:38.750 [2024-12-10 03:04:32.071839] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:38.750 [2024-12-10 03:04:32.071854] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:38.750 [2024-12-10 03:04:32.071860] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:38.750 [2024-12-10 03:04:32.080610] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:38.750 [2024-12-10 03:04:32.080628] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:38.750 [2024-12-10 03:04:32.087405] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:38.750 [2024-12-10 03:04:32.087963] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:38.750 [2024-12-10 03:04:32.096438] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:38.750 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.750 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:38.750 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.750 
03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:38.750 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.750 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 [2024-12-10 03:04:32.271507] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:38.751 [2024-12-10 03:04:32.271845] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:38.751 [2024-12-10 03:04:32.271857] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:38.751 [2024-12-10 03:04:32.271865] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:38.751 [2024-12-10 03:04:32.283413] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:38.751 [2024-12-10 03:04:32.283436] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:38.751 [2024-12-10 03:04:32.291394] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:38.751 [2024-12-10 03:04:32.291960] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:38.751 [2024-12-10 03:04:32.296935] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 [2024-12-10 03:04:32.474519] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:38.751 [2024-12-10 03:04:32.474842] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:38.751 [2024-12-10 03:04:32.474857] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:38.751 [2024-12-10 03:04:32.474863] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:38.751 
[2024-12-10 03:04:32.482420] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:38.751 [2024-12-10 03:04:32.482438] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:38.751 [2024-12-10 03:04:32.490405] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:38.751 [2024-12-10 03:04:32.490943] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:38.751 [2024-12-10 03:04:32.494026] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:38.751 { 00:16:38.751 "ublk_device": "/dev/ublkb0", 00:16:38.751 "id": 0, 00:16:38.751 "queue_depth": 512, 00:16:38.751 "num_queues": 4, 00:16:38.751 "bdev_name": "Malloc0" 00:16:38.751 }, 00:16:38.751 { 00:16:38.751 "ublk_device": "/dev/ublkb1", 00:16:38.751 "id": 1, 00:16:38.751 "queue_depth": 512, 00:16:38.751 "num_queues": 4, 00:16:38.751 "bdev_name": "Malloc1" 00:16:38.751 }, 00:16:38.751 { 00:16:38.751 "ublk_device": "/dev/ublkb2", 00:16:38.751 "id": 2, 00:16:38.751 "queue_depth": 512, 00:16:38.751 "num_queues": 4, 00:16:38.751 "bdev_name": "Malloc2" 00:16:38.751 }, 00:16:38.751 { 00:16:38.751 "ublk_device": "/dev/ublkb3", 00:16:38.751 "id": 3, 00:16:38.751 "queue_depth": 512, 00:16:38.751 "num_queues": 4, 00:16:38.751 "bdev_name": "Malloc3" 00:16:38.751 } 00:16:38.751 ]' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:38.751 03:04:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:38.751 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:38.751 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.751 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:38.751 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:38.751 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:38.751 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:38.751 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:38.751 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:38.751 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:38.751 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:38.751 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.009 [2024-12-10 03:04:33.162541] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.009 [2024-12-10 03:04:33.194057] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.009 [2024-12-10 03:04:33.195301] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.009 [2024-12-10 03:04:33.201416] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.009 [2024-12-10 03:04:33.201728] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:39.009 [2024-12-10 03:04:33.201745] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.009 [2024-12-10 03:04:33.217509] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.009 [2024-12-10 03:04:33.249464] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.009 [2024-12-10 03:04:33.250468] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.009 [2024-12-10 03:04:33.257413] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.009 [2024-12-10 03:04:33.257688] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:39.009 [2024-12-10 03:04:33.257704] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.009 [2024-12-10 03:04:33.273505] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.009 [2024-12-10 03:04:33.312915] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.009 [2024-12-10 03:04:33.314155] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.009 [2024-12-10 03:04:33.321409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.009 [2024-12-10 03:04:33.321668] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:39.009 [2024-12-10 03:04:33.321684] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.009 [2024-12-10 03:04:33.337490] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.009 [2024-12-10 03:04:33.369450] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.009 [2024-12-10 03:04:33.370245] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.009 [2024-12-10 03:04:33.378440] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.009 [2024-12-10 03:04:33.378706] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:39.009 [2024-12-10 03:04:33.378721] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.009 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:39.267 [2024-12-10 03:04:33.569477] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:39.267 [2024-12-10 03:04:33.577394] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:39.267 [2024-12-10 03:04:33.577428] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:39.267 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:39.267 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.267 03:04:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:39.267 03:04:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.267 03:04:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.834 03:04:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.834 03:04:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.834 03:04:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:39.834 03:04:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.834 03:04:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.400 03:04:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.400 03:04:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:40.400 03:04:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:40.400 03:04:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.400 03:04:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.658 03:04:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.658 03:04:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:40.658 03:04:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:40.658 03:04:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.658 03:04:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:41.224 03:04:35 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:41.224 ************************************ 00:16:41.224 END TEST test_create_multi_ublk 00:16:41.224 ************************************ 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:41.224 00:16:41.224 real 0m3.920s 00:16:41.224 user 0m0.806s 00:16:41.224 sys 0m0.150s 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.224 03:04:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.224 03:04:35 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:41.224 03:04:35 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:41.224 03:04:35 ublk -- ublk/ublk.sh@130 -- # killprocess 73650 00:16:41.224 03:04:35 ublk -- common/autotest_common.sh@954 -- # '[' -z 73650 ']' 00:16:41.224 03:04:35 ublk -- common/autotest_common.sh@958 -- # kill -0 73650 00:16:41.224 03:04:35 ublk -- common/autotest_common.sh@959 -- # uname 00:16:41.224 03:04:35 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.224 03:04:35 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73650 00:16:41.224 killing process with pid 73650 00:16:41.224 03:04:35 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.224 03:04:35 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.224 03:04:35 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73650' 00:16:41.224 03:04:35 ublk -- common/autotest_common.sh@973 -- # kill 73650 00:16:41.224 03:04:35 ublk -- common/autotest_common.sh@978 -- # wait 73650 00:16:42.183 [2024-12-10 03:04:36.261684] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:42.183 [2024-12-10 03:04:36.261741] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:42.749 00:16:42.749 real 0m24.985s 00:16:42.749 user 0m36.298s 00:16:42.749 sys 0m9.429s 00:16:42.749 03:04:36 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:42.749 03:04:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:42.749 ************************************ 00:16:42.749 END TEST ublk 00:16:42.749 ************************************ 00:16:42.749 03:04:37 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:42.749 
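[editor's note] Taken together, the two create tests above walk the full ublk control-plane lifecycle that the DEBUG lines trace (ADD_DEV, SET_PARAMS, START_DEV on the way up; STOP_DEV, DEL_DEV on the way down). As plain rpc.py calls, using only method names and arguments that appear in this log:

    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b Malloc0 128 4096
    rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512    # exposes /dev/ublkb0
    rpc.py ublk_get_disks
    rpc.py ublk_stop_disk 0
    rpc.py ublk_destroy_target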
03:04:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:42.749 03:04:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:42.749 03:04:37 -- common/autotest_common.sh@10 -- # set +x 00:16:42.749 ************************************ 00:16:42.749 START TEST ublk_recovery 00:16:42.749 ************************************ 00:16:42.749 03:04:37 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:42.749 * Looking for test storage... 00:16:42.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:42.749 03:04:37 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:42.749 03:04:37 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:42.749 03:04:37 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:43.006 03:04:37 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:43.006 03:04:37 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.006 03:04:37 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.006 03:04:37 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.006 03:04:37 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.006 03:04:37 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.006 03:04:37 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.006 03:04:37 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.006 03:04:37 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.006 03:04:37 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.006 03:04:37 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.006 03:04:37 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.006 03:04:37 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.007 03:04:37 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:43.007 03:04:37 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.007 03:04:37 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.007 --rc genhtml_branch_coverage=1 00:16:43.007 --rc genhtml_function_coverage=1 00:16:43.007 --rc genhtml_legend=1 00:16:43.007 --rc geninfo_all_blocks=1 00:16:43.007 --rc geninfo_unexecuted_blocks=1 00:16:43.007 00:16:43.007 ' 00:16:43.007 03:04:37 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.007 --rc genhtml_branch_coverage=1 00:16:43.007 --rc genhtml_function_coverage=1 00:16:43.007 --rc genhtml_legend=1 00:16:43.007 --rc geninfo_all_blocks=1 00:16:43.007 --rc geninfo_unexecuted_blocks=1 00:16:43.007 00:16:43.007 ' 00:16:43.007 03:04:37 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.007 --rc genhtml_branch_coverage=1 00:16:43.007 --rc genhtml_function_coverage=1 00:16:43.007 --rc genhtml_legend=1 00:16:43.007 --rc geninfo_all_blocks=1 00:16:43.007 --rc geninfo_unexecuted_blocks=1 00:16:43.007 00:16:43.007 ' 00:16:43.007 03:04:37 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.007 --rc genhtml_branch_coverage=1 00:16:43.007 --rc genhtml_function_coverage=1 00:16:43.007 --rc genhtml_legend=1 00:16:43.007 --rc geninfo_all_blocks=1 00:16:43.007 --rc geninfo_unexecuted_blocks=1 00:16:43.007 00:16:43.007 ' 00:16:43.007 03:04:37 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:43.007 03:04:37 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:43.007 03:04:37 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:43.007 03:04:37 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:43.007 03:04:37 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:43.007 03:04:37 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:43.007 03:04:37 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:43.007 03:04:37 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:43.007 03:04:37 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:43.007 03:04:37 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:43.007 03:04:37 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74051 00:16:43.007 03:04:37 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:43.007 03:04:37 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:43.007 03:04:37 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74051 00:16:43.007 03:04:37 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74051 ']' 00:16:43.007 03:04:37 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.007 03:04:37 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.007 03:04:37 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.007 03:04:37 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.007 03:04:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.007 [2024-12-10 03:04:37.247415] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:16:43.007 [2024-12-10 03:04:37.247537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74051 ] 00:16:43.265 [2024-12-10 03:04:37.399466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:43.265 [2024-12-10 03:04:37.493702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.265 [2024-12-10 03:04:37.493758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.832 03:04:38 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.832 03:04:38 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:43.832 03:04:38 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:43.832 03:04:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.832 03:04:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.832 [2024-12-10 03:04:38.076402] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:43.832 [2024-12-10 03:04:38.078444] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:43.832 03:04:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.832 03:04:38 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:43.832 03:04:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.832 03:04:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.832 malloc0 00:16:43.832 03:04:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.832 03:04:38 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:43.832 03:04:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.832 03:04:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.832 [2024-12-10 03:04:38.188529] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
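Setup for the recovery test, condensed from the trace above (the ublk_start_disk control-command exchange continues below): load the kernel driver, start a two-core target with ublk debug logging, create a 64 MiB malloc bdev, and export it as ublk device 1 with two queues of depth 128.

  modprobe ublk_drv
  build/bin/spdk_tgt -m 0x3 -L ublk &
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096    # 64 MiB, 4 KiB blocks
  scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128    # -> /dev/ublkb1
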
2 queue_depth 128 00:16:43.832 [2024-12-10 03:04:38.188632] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:43.832 [2024-12-10 03:04:38.188643] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:43.832 [2024-12-10 03:04:38.188651] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:43.832 [2024-12-10 03:04:38.197519] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:43.832 [2024-12-10 03:04:38.197539] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:43.832 [2024-12-10 03:04:38.204410] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:43.832 [2024-12-10 03:04:38.204557] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:44.090 [2024-12-10 03:04:38.220419] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:44.090 1 00:16:44.090 03:04:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.090 03:04:38 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:45.034 03:04:39 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74082 00:16:45.034 03:04:39 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:45.035 03:04:39 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:45.035 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.035 fio-3.35 00:16:45.035 Starting 1 process 00:16:50.300 03:04:44 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74051 00:16:50.300 03:04:44 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:55.566 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74051 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:55.566 03:04:49 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74197 00:16:55.566 03:04:49 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:55.566 03:04:49 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:55.566 03:04:49 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74197 00:16:55.566 03:04:49 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74197 ']' 00:16:55.566 03:04:49 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.566 03:04:49 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.566 03:04:49 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.566 03:04:49 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.566 03:04:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.566 [2024-12-10 03:04:49.314565] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
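This is the crash-injection half of the scenario: fio is pinned to cores 2-3 and drives /dev/ublkb1 while the SPDK target is SIGKILLed mid-I/O, after which a fresh target (pid 74197 in this run) is started so the orphaned kernel device can be recovered. In outline, with the fio command line and PIDs taken from the trace:

  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  sleep 5
  kill -9 74051                        # SIGKILL the target under live I/O
  sleep 5
  build/bin/spdk_tgt -m 0x3 -L ublk &  # restart; the kernel device persists
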
00:16:55.566 [2024-12-10 03:04:49.314682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74197 ] 00:16:55.566 [2024-12-10 03:04:49.475313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:55.566 [2024-12-10 03:04:49.573295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.566 [2024-12-10 03:04:49.573314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.824 03:04:50 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.824 03:04:50 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:55.824 03:04:50 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:55.824 03:04:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.824 03:04:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.824 [2024-12-10 03:04:50.169396] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:55.824 [2024-12-10 03:04:50.171222] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:55.824 03:04:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.824 03:04:50 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:55.824 03:04:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.824 03:04:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.082 malloc0 00:16:56.082 03:04:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.082 03:04:50 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:56.082 03:04:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.082 03:04:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.082 [2024-12-10 03:04:50.273507] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:56.082 [2024-12-10 03:04:50.273545] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:56.082 [2024-12-10 03:04:50.273555] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:56.082 [2024-12-10 03:04:50.281431] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:56.082 [2024-12-10 03:04:50.281455] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:56.082 1 00:16:56.082 03:04:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.082 03:04:50 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74082 00:16:57.017 [2024-12-10 03:04:51.281485] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:57.017 [2024-12-10 03:04:51.289400] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:57.017 [2024-12-10 03:04:51.289422] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:57.951 [2024-12-10 03:04:52.289446] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:57.951 [2024-12-10 03:04:52.293397] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:57.951 [2024-12-10 03:04:52.293411] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
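Recovery in the new target is a single RPC once the backing bdev exists again: ublk_recover_disk re-registers malloc0 against the surviving kernel device 1, then repeats UBLK_CMD_GET_DEV_INFO (the once-per-second loop traced here, which continues below) until the kernel-side device is ready for user recovery. A sketch of the driver side, using the commands from this run:

  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_recover_disk malloc0 1   # reattach bdev malloc0 to ublk id 1

Once the device-info check passes, the target issues UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY, visible further down with a roughly 20-second gap while in-flight I/O is re-fetched.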
Ublk 1 device state 1 00:16:59.325 [2024-12-10 03:04:53.293430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:59.325 [2024-12-10 03:04:53.297398] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:59.325 [2024-12-10 03:04:53.297408] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:59.325 [2024-12-10 03:04:53.297415] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:59.325 [2024-12-10 03:04:53.297480] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:21.258 [2024-12-10 03:05:14.615397] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:21.258 [2024-12-10 03:05:14.621913] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:21.258 [2024-12-10 03:05:14.629559] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:21.258 [2024-12-10 03:05:14.629580] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:47.795 00:17:47.795 fio_test: (groupid=0, jobs=1): err= 0: pid=74089: Tue Dec 10 03:05:39 2024 00:17:47.795 read: IOPS=15.0k, BW=58.5MiB/s (61.3MB/s)(3507MiB/60001msec) 00:17:47.795 slat (nsec): min=1107, max=124308, avg=4942.07, stdev=1405.01 00:17:47.795 clat (usec): min=634, max=30404k, avg=4369.78, stdev=264564.78 00:17:47.795 lat (usec): min=639, max=30404k, avg=4374.72, stdev=264564.79 00:17:47.795 clat percentiles (usec): 00:17:47.795 | 1.00th=[ 1663], 5.00th=[ 1762], 10.00th=[ 1795], 20.00th=[ 1811], 00:17:47.795 | 30.00th=[ 1844], 40.00th=[ 1860], 50.00th=[ 1860], 60.00th=[ 1876], 00:17:47.795 | 70.00th=[ 1909], 80.00th=[ 2008], 90.00th=[ 2671], 95.00th=[ 3032], 00:17:47.795 | 99.00th=[ 5014], 99.50th=[ 5473], 99.90th=[ 7046], 99.95th=[ 8848], 00:17:47.795 | 99.99th=[12911] 00:17:47.795 bw ( KiB/s): min=41448, max=130960, per=100.00%, avg=119753.36, stdev=19325.31, samples=59 00:17:47.795 iops : min=10362, max=32740, avg=29938.34, stdev=4831.33, samples=59 00:17:47.795 write: IOPS=14.9k, BW=58.4MiB/s (61.2MB/s)(3502MiB/60001msec); 0 zone resets 00:17:47.795 slat (nsec): min=1134, max=120234, avg=5000.77, stdev=1437.77 00:17:47.795 clat (usec): min=625, max=30404k, avg=4179.42, stdev=248697.31 00:17:47.795 lat (usec): min=630, max=30404k, avg=4184.42, stdev=248697.32 00:17:47.795 clat percentiles (usec): 00:17:47.795 | 1.00th=[ 1713], 5.00th=[ 1844], 10.00th=[ 1876], 20.00th=[ 1909], 00:17:47.795 | 30.00th=[ 1926], 40.00th=[ 1942], 50.00th=[ 1958], 60.00th=[ 1975], 00:17:47.795 | 70.00th=[ 1991], 80.00th=[ 2073], 90.00th=[ 2769], 95.00th=[ 2966], 00:17:47.795 | 99.00th=[ 5080], 99.50th=[ 5538], 99.90th=[ 7111], 99.95th=[ 8848], 00:17:47.795 | 99.99th=[13435] 00:17:47.795 bw ( KiB/s): min=42176, max=130288, per=100.00%, avg=119572.07, stdev=19132.63, samples=59 00:17:47.795 iops : min=10544, max=32572, avg=29893.02, stdev=4783.16, samples=59 00:17:47.795 lat (usec) : 750=0.01%, 1000=0.01% 00:17:47.795 lat (msec) : 2=75.91%, 4=21.28%, 10=2.77%, 20=0.03%, >=2000=0.01% 00:17:47.795 cpu : usr=3.31%, sys=15.36%, ctx=60725, majf=0, minf=14 00:17:47.795 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:47.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:17:47.795 issued rwts: total=897829,896525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:47.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:47.795 00:17:47.795 Run status group 0 (all jobs): 00:17:47.795 READ: bw=58.5MiB/s (61.3MB/s), 58.5MiB/s-58.5MiB/s (61.3MB/s-61.3MB/s), io=3507MiB (3678MB), run=60001-60001msec 00:17:47.795 WRITE: bw=58.4MiB/s (61.2MB/s), 58.4MiB/s-58.4MiB/s (61.2MB/s-61.2MB/s), io=3502MiB (3672MB), run=60001-60001msec 00:17:47.795 00:17:47.795 Disk stats (read/write): 00:17:47.795 ublkb1: ios=894393/893098, merge=0/0, ticks=3870172/3621655, in_queue=7491828, util=99.90% 00:17:47.795 03:05:39 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.795 [2024-12-10 03:05:39.481697] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:47.795 [2024-12-10 03:05:39.517491] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:47.795 [2024-12-10 03:05:39.517625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:47.795 [2024-12-10 03:05:39.525401] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:47.795 [2024-12-10 03:05:39.525484] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:47.795 [2024-12-10 03:05:39.525490] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.795 03:05:39 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.795 [2024-12-10 03:05:39.539474] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:47.795 [2024-12-10 03:05:39.543017] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:47.795 [2024-12-10 03:05:39.543047] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.795 03:05:39 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:47.795 03:05:39 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:47.795 03:05:39 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74197 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74197 ']' 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74197 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74197 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.795 killing process with pid 74197 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74197' 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74197 00:17:47.795 03:05:39 ublk_recovery -- common/autotest_common.sh@978 -- # 
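The fio summary is internally consistent and shows the recovery was transparent to the workload: 897,829 completed reads over the 60 s run is 897829 / 60, about 14,964 IOPS, and at the 4 KiB block size that is 14964 x 4096 B, about 61.3 MB/s (58.5 MiB/s), matching the reported READ bandwidth; the WRITE line works out the same way from 896,525 completions. Utilization of 99.90% across a window that includes the kill and recovery suggests the queues stalled only briefly.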
wait 74197 00:17:47.795 [2024-12-10 03:05:40.602922] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:47.795 [2024-12-10 03:05:40.602966] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:47.795 00:17:47.795 real 1m4.262s 00:17:47.795 user 1m46.229s 00:17:47.795 sys 0m22.876s 00:17:47.795 03:05:41 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.795 ************************************ 00:17:47.795 END TEST ublk_recovery 00:17:47.795 ************************************ 00:17:47.795 03:05:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.795 03:05:41 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:47.795 03:05:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:47.795 03:05:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:47.795 03:05:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:47.795 03:05:41 -- common/autotest_common.sh@10 -- # set +x 00:17:47.795 03:05:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:47.795 03:05:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:47.795 03:05:41 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:47.795 03:05:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:47.795 03:05:41 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:47.795 03:05:41 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:47.796 03:05:41 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:47.796 03:05:41 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:47.796 03:05:41 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:47.796 03:05:41 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:47.796 03:05:41 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:47.796 03:05:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:47.796 03:05:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.796 03:05:41 -- common/autotest_common.sh@10 -- # set +x 00:17:47.796 ************************************ 00:17:47.796 START TEST ftl 00:17:47.796 ************************************ 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:47.796 * Looking for test storage... 
00:17:47.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:47.796 03:05:41 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.796 03:05:41 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.796 03:05:41 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.796 03:05:41 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.796 03:05:41 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.796 03:05:41 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.796 03:05:41 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.796 03:05:41 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.796 03:05:41 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.796 03:05:41 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.796 03:05:41 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.796 03:05:41 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:47.796 03:05:41 ftl -- scripts/common.sh@345 -- # : 1 00:17:47.796 03:05:41 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.796 03:05:41 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:47.796 03:05:41 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:47.796 03:05:41 ftl -- scripts/common.sh@353 -- # local d=1 00:17:47.796 03:05:41 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.796 03:05:41 ftl -- scripts/common.sh@355 -- # echo 1 00:17:47.796 03:05:41 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.796 03:05:41 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:47.796 03:05:41 ftl -- scripts/common.sh@353 -- # local d=2 00:17:47.796 03:05:41 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.796 03:05:41 ftl -- scripts/common.sh@355 -- # echo 2 00:17:47.796 03:05:41 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.796 03:05:41 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.796 03:05:41 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.796 03:05:41 ftl -- scripts/common.sh@368 -- # return 0 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:47.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.796 --rc genhtml_branch_coverage=1 00:17:47.796 --rc genhtml_function_coverage=1 00:17:47.796 --rc genhtml_legend=1 00:17:47.796 --rc geninfo_all_blocks=1 00:17:47.796 --rc geninfo_unexecuted_blocks=1 00:17:47.796 00:17:47.796 ' 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:47.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.796 --rc genhtml_branch_coverage=1 00:17:47.796 --rc genhtml_function_coverage=1 00:17:47.796 --rc genhtml_legend=1 00:17:47.796 --rc geninfo_all_blocks=1 00:17:47.796 --rc geninfo_unexecuted_blocks=1 00:17:47.796 00:17:47.796 ' 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:47.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.796 --rc genhtml_branch_coverage=1 00:17:47.796 --rc genhtml_function_coverage=1 00:17:47.796 --rc 
genhtml_legend=1 00:17:47.796 --rc geninfo_all_blocks=1 00:17:47.796 --rc geninfo_unexecuted_blocks=1 00:17:47.796 00:17:47.796 ' 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:47.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.796 --rc genhtml_branch_coverage=1 00:17:47.796 --rc genhtml_function_coverage=1 00:17:47.796 --rc genhtml_legend=1 00:17:47.796 --rc geninfo_all_blocks=1 00:17:47.796 --rc geninfo_unexecuted_blocks=1 00:17:47.796 00:17:47.796 ' 00:17:47.796 03:05:41 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:47.796 03:05:41 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:47.796 03:05:41 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.796 03:05:41 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.796 03:05:41 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:47.796 03:05:41 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:47.796 03:05:41 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.796 03:05:41 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:47.796 03:05:41 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:47.796 03:05:41 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.796 03:05:41 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.796 03:05:41 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:47.796 03:05:41 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:47.796 03:05:41 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.796 03:05:41 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.796 03:05:41 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:47.796 03:05:41 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:47.796 03:05:41 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.796 03:05:41 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.796 03:05:41 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:47.796 03:05:41 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:47.796 03:05:41 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.796 03:05:41 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.796 03:05:41 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.796 03:05:41 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.796 03:05:41 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:47.796 03:05:41 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:47.796 03:05:41 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.796 03:05:41 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.796 03:05:41 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.796 03:05:41 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:47.796 03:05:41 ftl -- ftl/ftl.sh@34 -- # 
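The block above is scripts/common.sh's lcov version gate being traced: lt 1.15 2 splits both version strings into numeric fields and compares them field by field, returning as soon as one side wins. Simplified to its core (the real helper also splits on '-' and ':' and validates each field):

  IFS=. read -ra ver1 <<< "1.15"
  IFS=. read -ra ver2 <<< "2"
  for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
      ((${ver1[v]:-0} < ${ver2[v]:-0})) && { echo lt; break; }
      ((${ver1[v]:-0} > ${ver2[v]:-0})) && { echo gt; break; }
  done
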
PCI_ALLOWED= 00:17:47.796 03:05:41 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:47.796 03:05:41 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:47.796 03:05:41 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:47.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:47.796 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.796 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.796 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.796 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.796 03:05:41 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74998 00:17:47.796 03:05:41 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74998 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@835 -- # '[' -z 74998 ']' 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.796 03:05:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:47.796 03:05:41 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:47.796 [2024-12-10 03:05:42.064504] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:17:47.796 [2024-12-10 03:05:42.065104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74998 ] 00:17:48.054 [2024-12-10 03:05:42.220605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.054 [2024-12-10 03:05:42.296965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.619 03:05:42 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.619 03:05:42 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:48.619 03:05:42 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:48.879 03:05:43 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:49.449 03:05:43 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:49.449 03:05:43 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:50.020 03:05:44 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:50.020 03:05:44 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:50.020 03:05:44 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:50.280 03:05:44 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:50.280 03:05:44 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:50.280 03:05:44 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:50.280 03:05:44 ftl -- ftl/ftl.sh@50 -- # break 00:17:50.280 03:05:44 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:50.280 03:05:44 ftl -- 
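Worth noting in the trace above: ftl.sh starts the target with --wait-for-rpc, which parks the framework before subsystem initialization so that bdev_set_options -d (disable auto-examine) can be applied first, keeping the bdev layer from claiming the namespaces before the test partitions them. Only then is initialization finished and the NVMe configuration injected. As a sketch:

  build/bin/spdk_tgt --wait-for-rpc &
  scripts/rpc.py bdev_set_options -d           # must land before framework init
  scripts/rpc.py framework_start_init
  scripts/rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)  # the /dev/fd/62 above
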
ftl/ftl.sh@59 -- # base_size=1310720 00:17:50.280 03:05:44 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:50.280 03:05:44 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:50.541 03:05:44 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:50.541 03:05:44 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:50.541 03:05:44 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:50.541 03:05:44 ftl -- ftl/ftl.sh@63 -- # break 00:17:50.541 03:05:44 ftl -- ftl/ftl.sh@66 -- # killprocess 74998 00:17:50.541 03:05:44 ftl -- common/autotest_common.sh@954 -- # '[' -z 74998 ']' 00:17:50.541 03:05:44 ftl -- common/autotest_common.sh@958 -- # kill -0 74998 00:17:50.541 03:05:44 ftl -- common/autotest_common.sh@959 -- # uname 00:17:50.541 03:05:44 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.541 03:05:44 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74998 00:17:50.541 killing process with pid 74998 00:17:50.541 03:05:44 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:50.541 03:05:44 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:50.541 03:05:44 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74998' 00:17:50.541 03:05:44 ftl -- common/autotest_common.sh@973 -- # kill 74998 00:17:50.541 03:05:44 ftl -- common/autotest_common.sh@978 -- # wait 74998 00:17:51.924 03:05:45 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:51.924 03:05:45 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:51.924 03:05:45 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:51.924 03:05:45 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.924 03:05:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:51.924 ************************************ 00:17:51.924 START TEST ftl_fio_basic 00:17:51.924 ************************************ 00:17:51.924 03:05:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:51.924 * Looking for test storage... 
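Device selection is data-driven rather than hardcoded: the two bdev_get_bdevs | jq queries traced above pick, as the FTL non-volatile cache, any non-zoned namespace of at least 1310720 blocks that carries 64 bytes of per-block metadata (0000:00:10.0 here), and as the base device any other large-enough namespace (0000:00:11.0). Runnable as-is against a live target:

  scripts/rpc.py bdev_get_bdevs | jq -r '.[]
      | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
      .driver_specific.nvme[].pci_address'

  scripts/rpc.py bdev_get_bdevs | jq -r '.[]
      | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
          and .zoned == false and .num_blocks >= 1310720)
      .driver_specific.nvme[].pci_address'

Those two BDFs are then handed to fio.sh as the base and cache arguments of the ftl_fio_basic run that starts here.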
00:17:51.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.924 03:05:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:51.924 03:05:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:17:51.924 03:05:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:51.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.924 --rc genhtml_branch_coverage=1 00:17:51.924 --rc genhtml_function_coverage=1 00:17:51.924 --rc genhtml_legend=1 00:17:51.924 --rc geninfo_all_blocks=1 00:17:51.924 --rc geninfo_unexecuted_blocks=1 00:17:51.924 00:17:51.924 ' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:51.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.924 --rc 
genhtml_branch_coverage=1 00:17:51.924 --rc genhtml_function_coverage=1 00:17:51.924 --rc genhtml_legend=1 00:17:51.924 --rc geninfo_all_blocks=1 00:17:51.924 --rc geninfo_unexecuted_blocks=1 00:17:51.924 00:17:51.924 ' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:51.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.924 --rc genhtml_branch_coverage=1 00:17:51.924 --rc genhtml_function_coverage=1 00:17:51.924 --rc genhtml_legend=1 00:17:51.924 --rc geninfo_all_blocks=1 00:17:51.924 --rc geninfo_unexecuted_blocks=1 00:17:51.924 00:17:51.924 ' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:51.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.924 --rc genhtml_branch_coverage=1 00:17:51.924 --rc genhtml_function_coverage=1 00:17:51.924 --rc genhtml_legend=1 00:17:51.924 --rc geninfo_all_blocks=1 00:17:51.924 --rc geninfo_unexecuted_blocks=1 00:17:51.924 00:17:51.924 ' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:51.924 
03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75131 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75131 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75131 ']' 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.924 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.925 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
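fio.sh keys its workloads off an associative array: suite['basic'], suite['extended'], and suite['nightly'] each map to a space-separated list of fio job profiles, and the third positional argument ('basic' here) selects which list runs against the FTL bdev named ftl0. In outline, with the values from the declarations above:

  declare -A suite
  suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
  tests=${suite[$3]}          # $3 = "basic" for this run
  export FTL_BDEV_NAME=ftl0
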
00:17:51.925 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.925 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:51.925 [2024-12-10 03:05:46.146994] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:17:51.925 [2024-12-10 03:05:46.147315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75131 ] 00:17:52.185 [2024-12-10 03:05:46.306866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:52.185 [2024-12-10 03:05:46.394072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.185 [2024-12-10 03:05:46.394363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.185 [2024-12-10 03:05:46.394404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.755 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.755 03:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:52.755 03:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:52.755 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:52.755 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:52.755 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:52.755 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:52.755 03:05:46 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:53.015 03:05:47 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:53.015 03:05:47 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:53.015 03:05:47 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:53.015 03:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:53.015 03:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:53.015 03:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:53.015 03:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:53.015 03:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:53.275 03:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:53.275 { 00:17:53.275 "name": "nvme0n1", 00:17:53.275 "aliases": [ 00:17:53.275 "201243a6-b3d8-4e5a-a50e-dd8e5505fd7c" 00:17:53.275 ], 00:17:53.275 "product_name": "NVMe disk", 00:17:53.275 "block_size": 4096, 00:17:53.275 "num_blocks": 1310720, 00:17:53.275 "uuid": "201243a6-b3d8-4e5a-a50e-dd8e5505fd7c", 00:17:53.275 "numa_id": -1, 00:17:53.275 "assigned_rate_limits": { 00:17:53.275 "rw_ios_per_sec": 0, 00:17:53.275 "rw_mbytes_per_sec": 0, 00:17:53.275 "r_mbytes_per_sec": 0, 00:17:53.275 "w_mbytes_per_sec": 0 00:17:53.275 }, 00:17:53.275 "claimed": false, 00:17:53.275 "zoned": false, 00:17:53.275 "supported_io_types": { 00:17:53.275 "read": true, 00:17:53.275 "write": true, 00:17:53.275 "unmap": true, 00:17:53.275 "flush": true, 00:17:53.275 "reset": true, 00:17:53.275 "nvme_admin": true, 00:17:53.275 "nvme_io": true, 00:17:53.275 "nvme_io_md": 
false, 00:17:53.275 "write_zeroes": true, 00:17:53.275 "zcopy": false, 00:17:53.275 "get_zone_info": false, 00:17:53.275 "zone_management": false, 00:17:53.275 "zone_append": false, 00:17:53.275 "compare": true, 00:17:53.275 "compare_and_write": false, 00:17:53.275 "abort": true, 00:17:53.275 "seek_hole": false, 00:17:53.275 "seek_data": false, 00:17:53.275 "copy": true, 00:17:53.275 "nvme_iov_md": false 00:17:53.275 }, 00:17:53.275 "driver_specific": { 00:17:53.275 "nvme": [ 00:17:53.275 { 00:17:53.275 "pci_address": "0000:00:11.0", 00:17:53.275 "trid": { 00:17:53.275 "trtype": "PCIe", 00:17:53.275 "traddr": "0000:00:11.0" 00:17:53.275 }, 00:17:53.275 "ctrlr_data": { 00:17:53.275 "cntlid": 0, 00:17:53.275 "vendor_id": "0x1b36", 00:17:53.275 "model_number": "QEMU NVMe Ctrl", 00:17:53.275 "serial_number": "12341", 00:17:53.275 "firmware_revision": "8.0.0", 00:17:53.275 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:53.275 "oacs": { 00:17:53.275 "security": 0, 00:17:53.275 "format": 1, 00:17:53.275 "firmware": 0, 00:17:53.275 "ns_manage": 1 00:17:53.275 }, 00:17:53.275 "multi_ctrlr": false, 00:17:53.275 "ana_reporting": false 00:17:53.275 }, 00:17:53.275 "vs": { 00:17:53.275 "nvme_version": "1.4" 00:17:53.275 }, 00:17:53.275 "ns_data": { 00:17:53.275 "id": 1, 00:17:53.275 "can_share": false 00:17:53.275 } 00:17:53.275 } 00:17:53.275 ], 00:17:53.275 "mp_policy": "active_passive" 00:17:53.275 } 00:17:53.275 } 00:17:53.275 ]' 00:17:53.275 03:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:53.275 03:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:53.275 03:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:53.275 03:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:53.275 03:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:53.275 03:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:53.275 03:05:47 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:53.275 03:05:47 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:53.275 03:05:47 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:53.275 03:05:47 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:53.275 03:05:47 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:53.537 03:05:47 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:53.537 03:05:47 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:53.797 03:05:47 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=ffc353cd-de3b-46c2-9b02-c3b2c14000b5 00:17:53.797 03:05:47 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ffc353cd-de3b-46c2-9b02-c3b2c14000b5 00:17:53.797 03:05:48 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=7c477216-147e-4387-9665-8d823def468a 00:17:53.797 03:05:48 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7c477216-147e-4387-9665-8d823def468a 00:17:53.797 03:05:48 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:53.797 03:05:48 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:53.797 03:05:48 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=7c477216-147e-4387-9665-8d823def468a 00:17:53.797 03:05:48 
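The sizing dance above: get_bdev_size multiplies the two jq results, 1,310,720 blocks x 4,096 B = 5,368,709,120 B = 5,120 MiB, so the namespace is far smaller than the 103,424 MiB (101 GiB) the test asked for. The [[ 103424 -le 5120 ]] check accordingly fails, and the trace shows the test building the base device as a thin-provisioned logical volume instead, which is why bdev_lvol_create is passed -t: the logical size only materializes for blocks actually written. Condensed:

  scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  # thin-provisioned (-t): logical size may exceed the 5 GiB namespace
  scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t \
      -u ffc353cd-de3b-46c2-9b02-c3b2c14000b5   # lvstore uuid from above
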
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:53.797 03:05:48 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 7c477216-147e-4387-9665-8d823def468a 00:17:53.797 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=7c477216-147e-4387-9665-8d823def468a 00:17:53.797 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:53.797 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:53.797 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:53.797 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7c477216-147e-4387-9665-8d823def468a 00:17:54.058 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:54.058 { 00:17:54.058 "name": "7c477216-147e-4387-9665-8d823def468a", 00:17:54.058 "aliases": [ 00:17:54.058 "lvs/nvme0n1p0" 00:17:54.058 ], 00:17:54.058 "product_name": "Logical Volume", 00:17:54.058 "block_size": 4096, 00:17:54.058 "num_blocks": 26476544, 00:17:54.058 "uuid": "7c477216-147e-4387-9665-8d823def468a", 00:17:54.058 "assigned_rate_limits": { 00:17:54.058 "rw_ios_per_sec": 0, 00:17:54.058 "rw_mbytes_per_sec": 0, 00:17:54.058 "r_mbytes_per_sec": 0, 00:17:54.058 "w_mbytes_per_sec": 0 00:17:54.058 }, 00:17:54.058 "claimed": false, 00:17:54.058 "zoned": false, 00:17:54.058 "supported_io_types": { 00:17:54.058 "read": true, 00:17:54.058 "write": true, 00:17:54.058 "unmap": true, 00:17:54.058 "flush": false, 00:17:54.058 "reset": true, 00:17:54.058 "nvme_admin": false, 00:17:54.058 "nvme_io": false, 00:17:54.058 "nvme_io_md": false, 00:17:54.058 "write_zeroes": true, 00:17:54.058 "zcopy": false, 00:17:54.058 "get_zone_info": false, 00:17:54.058 "zone_management": false, 00:17:54.058 "zone_append": false, 00:17:54.058 "compare": false, 00:17:54.058 "compare_and_write": false, 00:17:54.058 "abort": false, 00:17:54.058 "seek_hole": true, 00:17:54.058 "seek_data": true, 00:17:54.058 "copy": false, 00:17:54.058 "nvme_iov_md": false 00:17:54.058 }, 00:17:54.058 "driver_specific": { 00:17:54.058 "lvol": { 00:17:54.058 "lvol_store_uuid": "ffc353cd-de3b-46c2-9b02-c3b2c14000b5", 00:17:54.058 "base_bdev": "nvme0n1", 00:17:54.058 "thin_provision": true, 00:17:54.058 "num_allocated_clusters": 0, 00:17:54.058 "snapshot": false, 00:17:54.058 "clone": false, 00:17:54.058 "esnap_clone": false 00:17:54.058 } 00:17:54.058 } 00:17:54.058 } 00:17:54.058 ]' 00:17:54.058 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:54.058 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:54.058 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:54.058 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:54.058 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:54.058 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:54.058 03:05:48 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:54.058 03:05:48 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:54.058 03:05:48 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:54.320 03:05:48 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:54.320 03:05:48 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:17:54.320 03:05:48 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 7c477216-147e-4387-9665-8d823def468a 00:17:54.320 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=7c477216-147e-4387-9665-8d823def468a 00:17:54.320 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:54.320 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:54.320 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:54.320 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7c477216-147e-4387-9665-8d823def468a 00:17:54.579 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:54.579 { 00:17:54.579 "name": "7c477216-147e-4387-9665-8d823def468a", 00:17:54.579 "aliases": [ 00:17:54.579 "lvs/nvme0n1p0" 00:17:54.579 ], 00:17:54.579 "product_name": "Logical Volume", 00:17:54.579 "block_size": 4096, 00:17:54.579 "num_blocks": 26476544, 00:17:54.579 "uuid": "7c477216-147e-4387-9665-8d823def468a", 00:17:54.579 "assigned_rate_limits": { 00:17:54.579 "rw_ios_per_sec": 0, 00:17:54.579 "rw_mbytes_per_sec": 0, 00:17:54.579 "r_mbytes_per_sec": 0, 00:17:54.579 "w_mbytes_per_sec": 0 00:17:54.579 }, 00:17:54.579 "claimed": false, 00:17:54.579 "zoned": false, 00:17:54.579 "supported_io_types": { 00:17:54.579 "read": true, 00:17:54.579 "write": true, 00:17:54.579 "unmap": true, 00:17:54.579 "flush": false, 00:17:54.579 "reset": true, 00:17:54.579 "nvme_admin": false, 00:17:54.579 "nvme_io": false, 00:17:54.579 "nvme_io_md": false, 00:17:54.579 "write_zeroes": true, 00:17:54.579 "zcopy": false, 00:17:54.579 "get_zone_info": false, 00:17:54.579 "zone_management": false, 00:17:54.579 "zone_append": false, 00:17:54.579 "compare": false, 00:17:54.579 "compare_and_write": false, 00:17:54.579 "abort": false, 00:17:54.579 "seek_hole": true, 00:17:54.579 "seek_data": true, 00:17:54.579 "copy": false, 00:17:54.579 "nvme_iov_md": false 00:17:54.579 }, 00:17:54.579 "driver_specific": { 00:17:54.579 "lvol": { 00:17:54.579 "lvol_store_uuid": "ffc353cd-de3b-46c2-9b02-c3b2c14000b5", 00:17:54.579 "base_bdev": "nvme0n1", 00:17:54.579 "thin_provision": true, 00:17:54.579 "num_allocated_clusters": 0, 00:17:54.579 "snapshot": false, 00:17:54.579 "clone": false, 00:17:54.579 "esnap_clone": false 00:17:54.579 } 00:17:54.579 } 00:17:54.579 } 00:17:54.579 ]' 00:17:54.579 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:54.579 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:54.579 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:54.579 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:54.579 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:54.579 03:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:54.579 03:05:48 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:54.579 03:05:48 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:54.838 03:05:49 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:54.838 03:05:49 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:54.838 03:05:49 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:54.838 
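
The xtrace record above and the bash diagnostic on the next line capture a classic empty-operand pitfall in a POSIX test: whatever flag variable fio.sh line 52 compares expanded to an empty string, so `[` saw only `-eq 1` and reported a unary-operator error; the failed test exits with status 2, is treated as false, and the run simply continues to the next step. A minimal sketch of the failure mode and two defensive rewrites (the variable name `flag` is a hypothetical stand-in, not the one fio.sh actually uses):

  unset flag                 # stand-in for the variable that came up empty at fio.sh line 52
  [ $flag -eq 1 ]            # bash: [: -eq: unary operator expected ($flag expands to nothing, [ sees only '-eq 1')
  [ "${flag:-0}" -eq 1 ]     # defaulting the expansion keeps the POSIX [ expression well-formed
  [[ $flag -eq 1 ]]          # bash [[ ]] does not word-split; the empty operand is evaluated as arithmetic 0, so no error
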
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:54.838 03:05:49 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 7c477216-147e-4387-9665-8d823def468a 00:17:54.838 03:05:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=7c477216-147e-4387-9665-8d823def468a 00:17:54.838 03:05:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:54.838 03:05:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:54.838 03:05:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:54.838 03:05:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7c477216-147e-4387-9665-8d823def468a 00:17:55.096 03:05:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:55.096 { 00:17:55.096 "name": "7c477216-147e-4387-9665-8d823def468a", 00:17:55.096 "aliases": [ 00:17:55.096 "lvs/nvme0n1p0" 00:17:55.096 ], 00:17:55.096 "product_name": "Logical Volume", 00:17:55.096 "block_size": 4096, 00:17:55.096 "num_blocks": 26476544, 00:17:55.096 "uuid": "7c477216-147e-4387-9665-8d823def468a", 00:17:55.096 "assigned_rate_limits": { 00:17:55.096 "rw_ios_per_sec": 0, 00:17:55.096 "rw_mbytes_per_sec": 0, 00:17:55.096 "r_mbytes_per_sec": 0, 00:17:55.096 "w_mbytes_per_sec": 0 00:17:55.096 }, 00:17:55.096 "claimed": false, 00:17:55.096 "zoned": false, 00:17:55.096 "supported_io_types": { 00:17:55.096 "read": true, 00:17:55.096 "write": true, 00:17:55.096 "unmap": true, 00:17:55.096 "flush": false, 00:17:55.096 "reset": true, 00:17:55.096 "nvme_admin": false, 00:17:55.096 "nvme_io": false, 00:17:55.096 "nvme_io_md": false, 00:17:55.096 "write_zeroes": true, 00:17:55.096 "zcopy": false, 00:17:55.096 "get_zone_info": false, 00:17:55.096 "zone_management": false, 00:17:55.096 "zone_append": false, 00:17:55.096 "compare": false, 00:17:55.096 "compare_and_write": false, 00:17:55.096 "abort": false, 00:17:55.096 "seek_hole": true, 00:17:55.096 "seek_data": true, 00:17:55.096 "copy": false, 00:17:55.096 "nvme_iov_md": false 00:17:55.096 }, 00:17:55.096 "driver_specific": { 00:17:55.096 "lvol": { 00:17:55.096 "lvol_store_uuid": "ffc353cd-de3b-46c2-9b02-c3b2c14000b5", 00:17:55.096 "base_bdev": "nvme0n1", 00:17:55.096 "thin_provision": true, 00:17:55.096 "num_allocated_clusters": 0, 00:17:55.096 "snapshot": false, 00:17:55.096 "clone": false, 00:17:55.096 "esnap_clone": false 00:17:55.096 } 00:17:55.096 } 00:17:55.096 } 00:17:55.096 ]' 00:17:55.096 03:05:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:55.096 03:05:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:55.096 03:05:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:55.096 03:05:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:55.096 03:05:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:55.096 03:05:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:55.096 03:05:49 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:55.096 03:05:49 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:55.096 03:05:49 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7c477216-147e-4387-9665-8d823def468a -c nvc0n1p0 --l2p_dram_limit 60 00:17:55.355 [2024-12-10 03:05:49.588146] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.355 [2024-12-10 03:05:49.588183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:55.355 [2024-12-10 03:05:49.588197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:55.355 [2024-12-10 03:05:49.588204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.355 [2024-12-10 03:05:49.588255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.355 [2024-12-10 03:05:49.588265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.355 [2024-12-10 03:05:49.588273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:55.355 [2024-12-10 03:05:49.588279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.355 [2024-12-10 03:05:49.588309] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:55.355 [2024-12-10 03:05:49.589398] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:55.355 [2024-12-10 03:05:49.589424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.355 [2024-12-10 03:05:49.589431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.355 [2024-12-10 03:05:49.589439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:17:55.355 [2024-12-10 03:05:49.589444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.355 [2024-12-10 03:05:49.589506] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9176766d-4b1f-4478-ad82-28545fbfbf80 00:17:55.355 [2024-12-10 03:05:49.590540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.355 [2024-12-10 03:05:49.590569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:55.355 [2024-12-10 03:05:49.590577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:55.355 [2024-12-10 03:05:49.590585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.355 [2024-12-10 03:05:49.595229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.355 [2024-12-10 03:05:49.595258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.355 [2024-12-10 03:05:49.595266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.594 ms 00:17:55.355 [2024-12-10 03:05:49.595273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.355 [2024-12-10 03:05:49.595350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.355 [2024-12-10 03:05:49.595358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.355 [2024-12-10 03:05:49.595365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:55.355 [2024-12-10 03:05:49.595383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.355 [2024-12-10 03:05:49.595429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.355 [2024-12-10 03:05:49.595438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:55.355 [2024-12-10 03:05:49.595445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:55.355 [2024-12-10 03:05:49.595451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:55.355 [2024-12-10 03:05:49.595475] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:55.355 [2024-12-10 03:05:49.598306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.355 [2024-12-10 03:05:49.598330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.355 [2024-12-10 03:05:49.598339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.832 ms 00:17:55.355 [2024-12-10 03:05:49.598347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.355 [2024-12-10 03:05:49.598390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.355 [2024-12-10 03:05:49.598397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:55.355 [2024-12-10 03:05:49.598405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:55.355 [2024-12-10 03:05:49.598411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.355 [2024-12-10 03:05:49.598426] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:55.355 [2024-12-10 03:05:49.598545] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:55.355 [2024-12-10 03:05:49.598557] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:55.355 [2024-12-10 03:05:49.598565] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:55.355 [2024-12-10 03:05:49.598574] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:55.355 [2024-12-10 03:05:49.598581] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:55.355 [2024-12-10 03:05:49.598590] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:55.355 [2024-12-10 03:05:49.598596] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:55.355 [2024-12-10 03:05:49.598603] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:55.355 [2024-12-10 03:05:49.598608] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:55.355 [2024-12-10 03:05:49.598615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.355 [2024-12-10 03:05:49.598623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:55.355 [2024-12-10 03:05:49.598630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:17:55.355 [2024-12-10 03:05:49.598636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.355 [2024-12-10 03:05:49.598710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.355 [2024-12-10 03:05:49.598716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:55.355 [2024-12-10 03:05:49.598724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:55.355 [2024-12-10 03:05:49.598730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.355 [2024-12-10 03:05:49.598819] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:55.355 [2024-12-10 03:05:49.598826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:55.355 
[2024-12-10 03:05:49.598835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.355 [2024-12-10 03:05:49.598841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.355 [2024-12-10 03:05:49.598848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:55.355 [2024-12-10 03:05:49.598853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:55.355 [2024-12-10 03:05:49.598860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:55.355 [2024-12-10 03:05:49.598868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:55.355 [2024-12-10 03:05:49.598875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:55.355 [2024-12-10 03:05:49.598880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.355 [2024-12-10 03:05:49.598887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:55.355 [2024-12-10 03:05:49.598892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:55.355 [2024-12-10 03:05:49.598898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.355 [2024-12-10 03:05:49.598903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:55.355 [2024-12-10 03:05:49.598910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:55.355 [2024-12-10 03:05:49.598915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.355 [2024-12-10 03:05:49.598924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:55.355 [2024-12-10 03:05:49.598929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:55.355 [2024-12-10 03:05:49.598935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.355 [2024-12-10 03:05:49.598940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:55.355 [2024-12-10 03:05:49.598947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:55.355 [2024-12-10 03:05:49.598952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.355 [2024-12-10 03:05:49.598958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:55.355 [2024-12-10 03:05:49.598963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:55.355 [2024-12-10 03:05:49.598970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.355 [2024-12-10 03:05:49.598975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:55.355 [2024-12-10 03:05:49.598981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:55.355 [2024-12-10 03:05:49.598986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.355 [2024-12-10 03:05:49.598993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:55.355 [2024-12-10 03:05:49.598998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:55.355 [2024-12-10 03:05:49.599004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.355 [2024-12-10 03:05:49.599009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:55.355 [2024-12-10 03:05:49.599016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:55.355 [2024-12-10 03:05:49.599032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:17:55.355 [2024-12-10 03:05:49.599039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:55.355 [2024-12-10 03:05:49.599044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:55.355 [2024-12-10 03:05:49.599050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.356 [2024-12-10 03:05:49.599055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:55.356 [2024-12-10 03:05:49.599061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:55.356 [2024-12-10 03:05:49.599067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.356 [2024-12-10 03:05:49.599074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:55.356 [2024-12-10 03:05:49.599079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:55.356 [2024-12-10 03:05:49.599086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.356 [2024-12-10 03:05:49.599091] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:55.356 [2024-12-10 03:05:49.599099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:55.356 [2024-12-10 03:05:49.599105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.356 [2024-12-10 03:05:49.599111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.356 [2024-12-10 03:05:49.599117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:55.356 [2024-12-10 03:05:49.599125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:55.356 [2024-12-10 03:05:49.599131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:55.356 [2024-12-10 03:05:49.599138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:55.356 [2024-12-10 03:05:49.599143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:55.356 [2024-12-10 03:05:49.599149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:55.356 [2024-12-10 03:05:49.599156] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:55.356 [2024-12-10 03:05:49.599165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.356 [2024-12-10 03:05:49.599171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:55.356 [2024-12-10 03:05:49.599178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:55.356 [2024-12-10 03:05:49.599184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:55.356 [2024-12-10 03:05:49.599190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:55.356 [2024-12-10 03:05:49.599196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:55.356 [2024-12-10 03:05:49.599203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:55.356 [2024-12-10 
03:05:49.599208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:55.356 [2024-12-10 03:05:49.599215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:55.356 [2024-12-10 03:05:49.599220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:55.356 [2024-12-10 03:05:49.599228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:55.356 [2024-12-10 03:05:49.599233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:55.356 [2024-12-10 03:05:49.599240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:55.356 [2024-12-10 03:05:49.599246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:55.356 [2024-12-10 03:05:49.599252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:55.356 [2024-12-10 03:05:49.599258] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:55.356 [2024-12-10 03:05:49.599265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.356 [2024-12-10 03:05:49.599275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:55.356 [2024-12-10 03:05:49.599282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:55.356 [2024-12-10 03:05:49.599287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:55.356 [2024-12-10 03:05:49.599294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:55.356 [2024-12-10 03:05:49.599299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.356 [2024-12-10 03:05:49.599306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:55.356 [2024-12-10 03:05:49.599312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:17:55.356 [2024-12-10 03:05:49.599319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.356 [2024-12-10 03:05:49.599384] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
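
The layout dump above is internally consistent and can be sanity-checked with the same block arithmetic the get_bdev_size probes use earlier in this run (MiB = block_size * num_blocks / 1048576). A small bash sketch, with the numbers copied from the log records above:

  echo $(( 4096 * 26476544 / 1048576 ))     # 103424 -> "Base device capacity: 103424.00 MiB" (the thin lvol's num_blocks)
  echo $(( 20971520 * 4 / 1048576 ))        # 80     -> 20971520 L2P entries x 4-byte addresses = "Region l2p ... blocks: 80.00 MiB"
  echo $(( 20971520 * 4096 / 1073741824 ))  # 80     -> user-visible capacity in GiB, i.e. ftl0 reporting num_blocks 20971520

The --l2p_dram_limit 60 passed to bdev_ftl_create caps how much of that 80 MiB mapping table may stay resident in DRAM, which is why the startup log shortly reports "l2p maximum resident size is: 59 (of 60) MiB".
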
00:17:55.356 [2024-12-10 03:05:49.599397] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:58.637 [2024-12-10 03:05:52.915470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.637 [2024-12-10 03:05:52.915534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:58.637 [2024-12-10 03:05:52.915550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3316.069 ms 00:17:58.637 [2024-12-10 03:05:52.915559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.637 [2024-12-10 03:05:52.940827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.637 [2024-12-10 03:05:52.940872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.637 [2024-12-10 03:05:52.940884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.058 ms 00:17:58.637 [2024-12-10 03:05:52.940894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.637 [2024-12-10 03:05:52.941017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.637 [2024-12-10 03:05:52.941030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:58.637 [2024-12-10 03:05:52.941039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:58.637 [2024-12-10 03:05:52.941050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.637 [2024-12-10 03:05:52.983436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.637 [2024-12-10 03:05:52.983478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.637 [2024-12-10 03:05:52.983494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.334 ms 00:17:58.637 [2024-12-10 03:05:52.983503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.637 [2024-12-10 03:05:52.983540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.637 [2024-12-10 03:05:52.983551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.637 [2024-12-10 03:05:52.983559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:58.637 [2024-12-10 03:05:52.983568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.637 [2024-12-10 03:05:52.983939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.637 [2024-12-10 03:05:52.983958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.637 [2024-12-10 03:05:52.983967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:17:58.637 [2024-12-10 03:05:52.983978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.637 [2024-12-10 03:05:52.984100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.637 [2024-12-10 03:05:52.984111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.637 [2024-12-10 03:05:52.984120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:58.637 [2024-12-10 03:05:52.984130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.637 [2024-12-10 03:05:52.998345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.637 [2024-12-10 03:05:52.998530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.637 [2024-12-10 
03:05:52.998547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.195 ms 00:17:58.637 [2024-12-10 03:05:52.998557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.637 [2024-12-10 03:05:53.009788] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:58.895 [2024-12-10 03:05:53.023711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.895 [2024-12-10 03:05:53.023743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:58.895 [2024-12-10 03:05:53.023759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.073 ms 00:17:58.895 [2024-12-10 03:05:53.023766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.895 [2024-12-10 03:05:53.083624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.895 [2024-12-10 03:05:53.083669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:58.895 [2024-12-10 03:05:53.083687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.823 ms 00:17:58.895 [2024-12-10 03:05:53.083695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.895 [2024-12-10 03:05:53.083884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.895 [2024-12-10 03:05:53.083896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:58.895 [2024-12-10 03:05:53.083908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:17:58.895 [2024-12-10 03:05:53.083915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.895 [2024-12-10 03:05:53.106933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.895 [2024-12-10 03:05:53.106968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:58.895 [2024-12-10 03:05:53.106980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.978 ms 00:17:58.895 [2024-12-10 03:05:53.106989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.895 [2024-12-10 03:05:53.129492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.895 [2024-12-10 03:05:53.129522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:58.895 [2024-12-10 03:05:53.129534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.463 ms 00:17:58.895 [2024-12-10 03:05:53.129541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.895 [2024-12-10 03:05:53.130102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.895 [2024-12-10 03:05:53.130116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:58.895 [2024-12-10 03:05:53.130127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:17:58.895 [2024-12-10 03:05:53.130135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.895 [2024-12-10 03:05:53.195702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.895 [2024-12-10 03:05:53.195735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:58.895 [2024-12-10 03:05:53.195750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.529 ms 00:17:58.895 [2024-12-10 03:05:53.195760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.895 [2024-12-10 
03:05:53.219400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.895 [2024-12-10 03:05:53.219430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:58.895 [2024-12-10 03:05:53.219442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.565 ms 00:17:58.895 [2024-12-10 03:05:53.219450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.895 [2024-12-10 03:05:53.241872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.895 [2024-12-10 03:05:53.241900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:58.895 [2024-12-10 03:05:53.241911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.382 ms 00:17:58.895 [2024-12-10 03:05:53.241919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.895 [2024-12-10 03:05:53.264422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.895 [2024-12-10 03:05:53.264450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:58.895 [2024-12-10 03:05:53.264461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.462 ms 00:17:58.895 [2024-12-10 03:05:53.264468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.895 [2024-12-10 03:05:53.264513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.895 [2024-12-10 03:05:53.264522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:58.895 [2024-12-10 03:05:53.264536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:58.895 [2024-12-10 03:05:53.264543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.895 [2024-12-10 03:05:53.264618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.895 [2024-12-10 03:05:53.264627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:58.895 [2024-12-10 03:05:53.264636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:58.895 [2024-12-10 03:05:53.264644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.895 [2024-12-10 03:05:53.265563] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3676.958 ms, result 0 00:17:58.895 { 00:17:58.895 "name": "ftl0", 00:17:58.895 "uuid": "9176766d-4b1f-4478-ad82-28545fbfbf80" 00:17:58.895 } 00:17:59.153 03:05:53 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:59.153 03:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:59.153 03:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:59.153 03:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:59.154 03:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:59.154 03:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:59.154 03:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:59.154 03:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:59.411 [ 00:17:59.411 { 00:17:59.411 "name": "ftl0", 00:17:59.411 "aliases": [ 00:17:59.411 "9176766d-4b1f-4478-ad82-28545fbfbf80" 00:17:59.411 ], 00:17:59.411 "product_name": "FTL 
disk", 00:17:59.411 "block_size": 4096, 00:17:59.411 "num_blocks": 20971520, 00:17:59.411 "uuid": "9176766d-4b1f-4478-ad82-28545fbfbf80", 00:17:59.411 "assigned_rate_limits": { 00:17:59.411 "rw_ios_per_sec": 0, 00:17:59.411 "rw_mbytes_per_sec": 0, 00:17:59.411 "r_mbytes_per_sec": 0, 00:17:59.411 "w_mbytes_per_sec": 0 00:17:59.411 }, 00:17:59.411 "claimed": false, 00:17:59.411 "zoned": false, 00:17:59.411 "supported_io_types": { 00:17:59.411 "read": true, 00:17:59.411 "write": true, 00:17:59.411 "unmap": true, 00:17:59.411 "flush": true, 00:17:59.411 "reset": false, 00:17:59.411 "nvme_admin": false, 00:17:59.411 "nvme_io": false, 00:17:59.411 "nvme_io_md": false, 00:17:59.411 "write_zeroes": true, 00:17:59.411 "zcopy": false, 00:17:59.411 "get_zone_info": false, 00:17:59.411 "zone_management": false, 00:17:59.411 "zone_append": false, 00:17:59.411 "compare": false, 00:17:59.411 "compare_and_write": false, 00:17:59.411 "abort": false, 00:17:59.411 "seek_hole": false, 00:17:59.411 "seek_data": false, 00:17:59.411 "copy": false, 00:17:59.411 "nvme_iov_md": false 00:17:59.411 }, 00:17:59.411 "driver_specific": { 00:17:59.411 "ftl": { 00:17:59.411 "base_bdev": "7c477216-147e-4387-9665-8d823def468a", 00:17:59.411 "cache": "nvc0n1p0" 00:17:59.411 } 00:17:59.411 } 00:17:59.411 } 00:17:59.411 ] 00:17:59.411 03:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:59.411 03:05:53 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:59.411 03:05:53 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:59.670 03:05:53 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:59.670 03:05:53 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:59.935 [2024-12-10 03:05:54.074199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.935 [2024-12-10 03:05:54.074242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:59.935 [2024-12-10 03:05:54.074254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:59.935 [2024-12-10 03:05:54.074264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.935 [2024-12-10 03:05:54.074296] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:59.935 [2024-12-10 03:05:54.076889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.935 [2024-12-10 03:05:54.076919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:59.935 [2024-12-10 03:05:54.076933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.576 ms 00:17:59.935 [2024-12-10 03:05:54.076942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.935 [2024-12-10 03:05:54.077335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.935 [2024-12-10 03:05:54.077351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:59.935 [2024-12-10 03:05:54.077361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:17:59.935 [2024-12-10 03:05:54.077368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.935 [2024-12-10 03:05:54.080929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.935 [2024-12-10 03:05:54.080951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:59.935 
[2024-12-10 03:05:54.080962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.526 ms 00:17:59.935 [2024-12-10 03:05:54.080970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.935 [2024-12-10 03:05:54.087121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.935 [2024-12-10 03:05:54.087146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:59.935 [2024-12-10 03:05:54.087158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.127 ms 00:17:59.935 [2024-12-10 03:05:54.087166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.935 [2024-12-10 03:05:54.110639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.935 [2024-12-10 03:05:54.110672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:59.935 [2024-12-10 03:05:54.110696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.396 ms 00:17:59.935 [2024-12-10 03:05:54.110703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.935 [2024-12-10 03:05:54.125523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.935 [2024-12-10 03:05:54.125648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:59.935 [2024-12-10 03:05:54.125669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.774 ms 00:17:59.935 [2024-12-10 03:05:54.125677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.935 [2024-12-10 03:05:54.125849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.935 [2024-12-10 03:05:54.125860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:59.935 [2024-12-10 03:05:54.125870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:17:59.935 [2024-12-10 03:05:54.125878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.935 [2024-12-10 03:05:54.148744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.935 [2024-12-10 03:05:54.148850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:59.935 [2024-12-10 03:05:54.148867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.845 ms 00:17:59.935 [2024-12-10 03:05:54.148874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.935 [2024-12-10 03:05:54.171363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.935 [2024-12-10 03:05:54.171404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:59.935 [2024-12-10 03:05:54.171416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.455 ms 00:17:59.935 [2024-12-10 03:05:54.171437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.935 [2024-12-10 03:05:54.193236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.935 [2024-12-10 03:05:54.193264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:59.935 [2024-12-10 03:05:54.193276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.759 ms 00:17:59.935 [2024-12-10 03:05:54.193283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.935 [2024-12-10 03:05:54.215258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.935 [2024-12-10 03:05:54.215364] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:59.935 [2024-12-10 03:05:54.215402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.891 ms 00:17:59.935 [2024-12-10 03:05:54.215409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.935 [2024-12-10 03:05:54.215445] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:59.935 [2024-12-10 03:05:54.215458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 
[2024-12-10 03:05:54.215643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:59.935 [2024-12-10 03:05:54.215849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:59.935 [2024-12-10 03:05:54.215858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.215992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:59.936 [2024-12-10 03:05:54.216330] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:59.936 [2024-12-10 03:05:54.216339] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9176766d-4b1f-4478-ad82-28545fbfbf80 00:17:59.936 [2024-12-10 03:05:54.216347] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:59.936 [2024-12-10 03:05:54.216357] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:59.936 [2024-12-10 03:05:54.216364] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:59.936 [2024-12-10 03:05:54.216385] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:59.936 [2024-12-10 03:05:54.216392] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:59.936 [2024-12-10 03:05:54.216401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:59.936 [2024-12-10 03:05:54.216408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:59.936 [2024-12-10 03:05:54.216416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:59.936 [2024-12-10 03:05:54.216422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:59.936 [2024-12-10 03:05:54.216431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.936 [2024-12-10 03:05:54.216438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:59.936 [2024-12-10 03:05:54.216448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:17:59.936 [2024-12-10 03:05:54.216455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.936 [2024-12-10 03:05:54.229042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.936 [2024-12-10 03:05:54.229140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:59.936 [2024-12-10 03:05:54.229191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.552 ms 00:17:59.936 [2024-12-10 03:05:54.229214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.936 [2024-12-10 03:05:54.229592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.936 [2024-12-10 03:05:54.229621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:59.936 [2024-12-10 03:05:54.229676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:17:59.936 [2024-12-10 03:05:54.229697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.936 [2024-12-10 03:05:54.273284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.936 [2024-12-10 03:05:54.273395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.936 [2024-12-10 03:05:54.273446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.936 [2024-12-10 03:05:54.273468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
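
The rollback records that follow unwind the startup actions in reverse (core IO channel, bands, memory pools, superblock, cache and base bdevs). For reference, the RPC lifecycle this phase exercises, as invoked verbatim elsewhere in this run (rpc.py path and arguments as used by this CI environment; the -d argument is the lvol created earlier):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -t 240 bdev_ftl_create -b ftl0 -d 7c477216-147e-4387-9665-8d823def468a -c nvc0n1p0 --l2p_dram_limit 60
  $rpc save_subsystem_config -n bdev   # wrapped in the '{"subsystems": [' / ']}' markers fio.sh echoes for the fio bdev plugin
  $rpc bdev_ftl_unload -b ftl0         # persists L2P and metadata, sets "FTL clean state", then rolls back each init step

Because the unload persisted everything and marked the device clean, a subsequent load of the same FTL instance should start from the superblock rather than run dirty-shutdown recovery.
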
00:17:59.936 [2024-12-10 03:05:54.273536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.936 [2024-12-10 03:05:54.273557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.936 [2024-12-10 03:05:54.273613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.936 [2024-12-10 03:05:54.273635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.936 [2024-12-10 03:05:54.273737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.936 [2024-12-10 03:05:54.273767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.936 [2024-12-10 03:05:54.273788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.936 [2024-12-10 03:05:54.273841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.936 [2024-12-10 03:05:54.273883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.936 [2024-12-10 03:05:54.273904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.936 [2024-12-10 03:05:54.273960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.936 [2024-12-10 03:05:54.273983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.257 [2024-12-10 03:05:54.353674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.257 [2024-12-10 03:05:54.353811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:00.257 [2024-12-10 03:05:54.353862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.257 [2024-12-10 03:05:54.353884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.257 [2024-12-10 03:05:54.414970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.257 [2024-12-10 03:05:54.415105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:00.257 [2024-12-10 03:05:54.415155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.257 [2024-12-10 03:05:54.415178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.257 [2024-12-10 03:05:54.415259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.257 [2024-12-10 03:05:54.415284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.257 [2024-12-10 03:05:54.415308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.257 [2024-12-10 03:05:54.415327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.257 [2024-12-10 03:05:54.415424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.257 [2024-12-10 03:05:54.415502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.257 [2024-12-10 03:05:54.415529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.257 [2024-12-10 03:05:54.415548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.257 [2024-12-10 03:05:54.415662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.257 [2024-12-10 03:05:54.415687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.257 [2024-12-10 03:05:54.415709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.257 [2024-12-10 
03:05:54.415729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.257 [2024-12-10 03:05:54.415830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.257 [2024-12-10 03:05:54.415923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:00.257 [2024-12-10 03:05:54.415976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.257 [2024-12-10 03:05:54.415998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.257 [2024-12-10 03:05:54.416076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.257 [2024-12-10 03:05:54.416131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.257 [2024-12-10 03:05:54.416192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.257 [2024-12-10 03:05:54.416216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.257 [2024-12-10 03:05:54.416281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.257 [2024-12-10 03:05:54.416339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.257 [2024-12-10 03:05:54.416364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.257 [2024-12-10 03:05:54.416402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.257 [2024-12-10 03:05:54.416630] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.399 ms, result 0 00:18:00.257 true 00:18:00.257 03:05:54 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75131 00:18:00.257 03:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75131 ']' 00:18:00.257 03:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75131 00:18:00.257 03:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:00.257 03:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.257 03:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75131 00:18:00.257 killing process with pid 75131 00:18:00.257 03:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.257 03:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.257 03:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75131' 00:18:00.257 03:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75131 00:18:00.257 03:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75131 00:18:06.845 03:06:00 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:06.845 03:06:00 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:06.845 03:06:00 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:06.845 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:06.846 03:06:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:06.846 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:06.846 fio-3.35 00:18:06.846 Starting 1 thread 00:18:11.036 00:18:11.036 test: (groupid=0, jobs=1): err= 0: pid=75326: Tue Dec 10 03:06:05 2024 00:18:11.036 read: IOPS=1104, BW=73.3MiB/s (76.9MB/s)(255MiB/3471msec) 00:18:11.036 slat (nsec): min=3065, max=37348, avg=4583.36, stdev=2488.61 00:18:11.036 clat (usec): min=254, max=1268, avg=403.20, stdev=118.97 00:18:11.036 lat (usec): min=257, max=1274, avg=407.78, stdev=120.21 00:18:11.036 clat percentiles (usec): 00:18:11.036 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 314], 20.00th=[ 318], 00:18:11.036 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 347], 00:18:11.036 | 70.00th=[ 474], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 570], 00:18:11.036 | 99.00th=[ 873], 99.50th=[ 938], 99.90th=[ 1106], 99.95th=[ 1205], 00:18:11.036 | 99.99th=[ 1270] 00:18:11.036 write: IOPS=1111, BW=73.8MiB/s (77.4MB/s)(256MiB/3468msec); 0 zone resets 00:18:11.036 slat (nsec): min=13897, max=67458, avg=19026.17, stdev=4427.76 00:18:11.036 clat (usec): min=298, max=2117, avg=462.39, stdev=186.74 00:18:11.036 lat (usec): min=322, max=2144, avg=481.42, stdev=189.39 00:18:11.036 clat percentiles (usec): 00:18:11.036 | 1.00th=[ 334], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 347], 00:18:11.036 | 30.00th=[ 351], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 408], 00:18:11.036 | 70.00th=[ 545], 80.00th=[ 619], 90.00th=[ 652], 95.00th=[ 693], 00:18:11.036 | 99.00th=[ 1139], 99.50th=[ 1434], 99.90th=[ 2024], 99.95th=[ 2040], 00:18:11.036 | 99.99th=[ 2114] 00:18:11.036 bw ( KiB/s): min=47776, max=92344, per=96.90%, avg=73265.33, stdev=20812.27, samples=6 00:18:11.036 iops : min= 702, max= 1358, avg=1077.33, stdev=306.21, samples=6 00:18:11.036 lat (usec) : 500=69.62%, 750=27.90%, 1000=1.31% 
00:18:11.036 lat (msec) : 2=1.12%, 4=0.05% 00:18:11.036 cpu : usr=99.08%, sys=0.12%, ctx=8, majf=0, minf=1169 00:18:11.036 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:11.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.036 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:11.036 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:11.036 00:18:11.036 Run status group 0 (all jobs): 00:18:11.036 READ: bw=73.3MiB/s (76.9MB/s), 73.3MiB/s-73.3MiB/s (76.9MB/s-76.9MB/s), io=255MiB (267MB), run=3471-3471msec 00:18:11.036 WRITE: bw=73.8MiB/s (77.4MB/s), 73.8MiB/s-73.8MiB/s (77.4MB/s-77.4MB/s), io=256MiB (269MB), run=3468-3468msec 00:18:12.421 ----------------------------------------------------- 00:18:12.421 Suppressions used: 00:18:12.421 count bytes template 00:18:12.421 1 5 /usr/src/fio/parse.c 00:18:12.421 1 8 libtcmalloc_minimal.so 00:18:12.421 1 904 libcrypto.so 00:18:12.421 ----------------------------------------------------- 00:18:12.421 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:12.421 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:12.422 03:06:06 ftl.ftl_fio_basic -- 
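The xtrace output around this point shows the fio_bdev helper probing which sanitizer runtime the SPDK fio plugin links against, so fio can be started with the sanitizer preloaded ahead of the plugin. A sketch of that logic reconstructed from the traced commands — the paths come from this log, $fio_job stands in for the .fio file being run, and the loop body is an approximation of common/autotest_common.sh, not its exact source:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  sanitizers=('libasan' 'libclang_rt.asan')
  asan_lib=
  for sanitizer in "${sanitizers[@]}"; do
      # resolve the sanitizer runtime the plugin was linked against, if any
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n $asan_lib ]] && break   # e.g. /usr/lib64/libasan.so.8, as in the trace
  done
  # the sanitizer must be loaded before the plugin, hence the LD_PRELOAD ordering
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$fio_job"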
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:12.422 03:06:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:12.682 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:12.682 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:12.682 fio-3.35 00:18:12.682 Starting 2 threads 00:18:39.296 00:18:39.296 first_half: (groupid=0, jobs=1): err= 0: pid=75429: Tue Dec 10 03:06:31 2024 00:18:39.296 read: IOPS=2775, BW=10.8MiB/s (11.4MB/s)(256MiB/23592msec) 00:18:39.296 slat (nsec): min=3023, max=55399, avg=4141.52, stdev=1189.32 00:18:39.296 clat (usec): min=474, max=536553, avg=37874.99, stdev=30843.44 00:18:39.296 lat (usec): min=477, max=536563, avg=37879.13, stdev=30843.67 00:18:39.296 clat percentiles (msec): 00:18:39.296 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 29], 00:18:39.296 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 33], 00:18:39.296 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 44], 95.00th=[ 75], 00:18:39.296 | 99.00th=[ 167], 99.50th=[ 220], 99.90th=[ 426], 99.95th=[ 477], 00:18:39.296 | 99.99th=[ 527] 00:18:39.296 write: IOPS=2781, BW=10.9MiB/s (11.4MB/s)(256MiB/23563msec); 0 zone resets 00:18:39.296 slat (usec): min=3, max=548, avg= 5.67, stdev= 5.31 00:18:39.296 clat (usec): min=361, max=82612, avg=8223.51, stdev=8691.12 00:18:39.296 lat (usec): min=366, max=82618, avg=8229.18, stdev=8691.70 00:18:39.296 clat percentiles (usec): 00:18:39.296 | 1.00th=[ 725], 5.00th=[ 898], 10.00th=[ 1254], 20.00th=[ 2671], 00:18:39.296 | 30.00th=[ 3687], 40.00th=[ 4686], 50.00th=[ 5211], 60.00th=[ 5800], 00:18:39.296 | 70.00th=[ 6915], 80.00th=[13829], 90.00th=[20841], 95.00th=[25560], 00:18:39.296 | 99.00th=[34866], 99.50th=[43254], 99.90th=[76022], 99.95th=[78119], 00:18:39.296 | 99.99th=[80217] 00:18:39.296 bw ( KiB/s): min= 16, max=54312, per=97.53%, avg=21701.67, stdev=15540.47, samples=24 00:18:39.296 iops : min= 4, max=13578, avg=5425.42, stdev=3885.12, samples=24 00:18:39.296 lat (usec) : 500=0.05%, 750=0.71%, 1000=2.52% 00:18:39.296 lat (msec) : 2=4.79%, 4=8.38%, 10=23.11%, 20=6.24%, 50=50.47% 00:18:39.296 lat (msec) : 100=1.91%, 250=1.61%, 500=0.19%, 750=0.02% 00:18:39.296 cpu : usr=99.09%, sys=0.11%, ctx=93, majf=0, minf=5524 00:18:39.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:39.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.296 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:39.296 issued rwts: total=65468,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.296 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:39.296 second_half: (groupid=0, jobs=1): err= 0: pid=75430: Tue Dec 10 03:06:31 2024 00:18:39.296 read: IOPS=2795, BW=10.9MiB/s (11.5MB/s)(256MiB/23426msec) 00:18:39.296 slat (nsec): min=3076, max=41444, avg=4673.11, stdev=1361.13 00:18:39.296 clat (msec): min=14, max=394, avg=37.95, stdev=24.36 00:18:39.296 lat (msec): min=14, max=394, avg=37.96, stdev=24.36 00:18:39.296 clat percentiles (msec): 00:18:39.296 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:18:39.296 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:18:39.296 | 70.00th=[ 35], 80.00th=[ 38], 90.00th=[ 44], 95.00th=[ 
77], 00:18:39.296 | 99.00th=[ 159], 99.50th=[ 176], 99.90th=[ 288], 99.95th=[ 334], 00:18:39.296 | 99.99th=[ 376] 00:18:39.296 write: IOPS=2813, BW=11.0MiB/s (11.5MB/s)(256MiB/23292msec); 0 zone resets 00:18:39.296 slat (usec): min=3, max=3831, avg= 6.47, stdev=26.65 00:18:39.296 clat (usec): min=361, max=62274, avg=7807.02, stdev=6632.38 00:18:39.296 lat (usec): min=369, max=62279, avg=7813.49, stdev=6633.57 00:18:39.296 clat percentiles (usec): 00:18:39.296 | 1.00th=[ 930], 5.00th=[ 2073], 10.00th=[ 2671], 20.00th=[ 3425], 00:18:39.296 | 30.00th=[ 4359], 40.00th=[ 4948], 50.00th=[ 5342], 60.00th=[ 5866], 00:18:39.296 | 70.00th=[ 6980], 80.00th=[10421], 90.00th=[19530], 95.00th=[22676], 00:18:39.296 | 99.00th=[27919], 99.50th=[32113], 99.90th=[49546], 99.95th=[50594], 00:18:39.296 | 99.99th=[61080] 00:18:39.296 bw ( KiB/s): min= 2160, max=41472, per=100.00%, avg=23666.91, stdev=14702.87, samples=22 00:18:39.296 iops : min= 540, max=10368, avg=5916.73, stdev=3675.72, samples=22 00:18:39.296 lat (usec) : 500=0.03%, 750=0.16%, 1000=0.39% 00:18:39.296 lat (msec) : 2=1.73%, 4=10.94%, 10=26.46%, 20=5.78%, 50=50.91% 00:18:39.296 lat (msec) : 100=1.82%, 250=1.73%, 500=0.06% 00:18:39.296 cpu : usr=99.28%, sys=0.19%, ctx=38, majf=0, minf=5581 00:18:39.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:39.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.296 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:39.296 issued rwts: total=65491,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.296 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:39.296 00:18:39.296 Run status group 0 (all jobs): 00:18:39.296 READ: bw=21.7MiB/s (22.7MB/s), 10.8MiB/s-10.9MiB/s (11.4MB/s-11.5MB/s), io=512MiB (536MB), run=23426-23592msec 00:18:39.296 WRITE: bw=21.7MiB/s (22.8MB/s), 10.9MiB/s-11.0MiB/s (11.4MB/s-11.5MB/s), io=512MiB (537MB), run=23292-23563msec 00:18:39.557 ----------------------------------------------------- 00:18:39.557 Suppressions used: 00:18:39.557 count bytes template 00:18:39.557 2 10 /usr/src/fio/parse.c 00:18:39.557 3 288 /usr/src/fio/iolog.c 00:18:39.557 1 8 libtcmalloc_minimal.so 00:18:39.557 1 904 libcrypto.so 00:18:39.557 ----------------------------------------------------- 00:18:39.557 00:18:39.557 03:06:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:39.557 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:39.557 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:39.816 03:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:39.816 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:39.816 fio-3.35 00:18:39.816 Starting 1 thread 00:18:54.685 00:18:54.685 test: (groupid=0, jobs=1): err= 0: pid=75737: Tue Dec 10 03:06:48 2024 00:18:54.685 read: IOPS=7625, BW=29.8MiB/s (31.2MB/s)(255MiB/8550msec) 00:18:54.685 slat (usec): min=3, max=212, avg= 4.58, stdev= 1.57 00:18:54.685 clat (usec): min=481, max=37692, avg=16776.37, stdev=3071.34 00:18:54.685 lat (usec): min=485, max=37698, avg=16780.95, stdev=3071.54 00:18:54.685 clat percentiles (usec): 00:18:54.685 | 1.00th=[13173], 5.00th=[13304], 10.00th=[13566], 20.00th=[14615], 00:18:54.685 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15795], 60.00th=[16712], 00:18:54.685 | 70.00th=[17433], 80.00th=[18744], 90.00th=[21103], 95.00th=[23200], 00:18:54.685 | 99.00th=[26346], 99.50th=[27657], 99.90th=[32637], 99.95th=[34866], 00:18:54.685 | 99.99th=[36963] 00:18:54.685 write: IOPS=13.6k, BW=53.2MiB/s (55.8MB/s)(256MiB/4809msec); 0 zone resets 00:18:54.685 slat (usec): min=4, max=941, avg= 5.97, stdev= 5.04 00:18:54.685 clat (usec): min=470, max=46645, avg=9345.03, stdev=9751.29 00:18:54.685 lat (usec): min=476, max=46650, avg=9351.00, stdev=9751.32 00:18:54.685 clat percentiles (usec): 00:18:54.685 | 1.00th=[ 619], 5.00th=[ 693], 10.00th=[ 775], 20.00th=[ 938], 00:18:54.685 | 30.00th=[ 1057], 40.00th=[ 1401], 50.00th=[ 7046], 60.00th=[ 9634], 00:18:54.685 | 70.00th=[11994], 80.00th=[14615], 90.00th=[27919], 95.00th=[29492], 00:18:54.685 | 99.00th=[32637], 99.50th=[34866], 99.90th=[38011], 99.95th=[39060], 00:18:54.685 | 99.99th=[45351] 00:18:54.685 bw ( KiB/s): min=34304, max=70472, per=96.18%, avg=52428.80, stdev=11521.53, samples=10 00:18:54.685 iops : min= 8576, max=17618, avg=13107.20, stdev=2880.38, samples=10 00:18:54.685 lat (usec) : 500=0.01%, 750=4.34%, 1000=8.20% 00:18:54.685 lat (msec) : 2=8.02%, 4=0.46%, 10=9.88%, 20=54.01%, 50=15.07% 00:18:54.685 cpu : usr=98.35%, sys=0.52%, ctx=54, majf=0, minf=5565 00:18:54.685 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:54.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.685 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.685 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.685 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.685 00:18:54.685 Run status group 0 (all jobs): 00:18:54.685 READ: bw=29.8MiB/s (31.2MB/s), 29.8MiB/s-29.8MiB/s (31.2MB/s-31.2MB/s), io=255MiB (267MB), run=8550-8550msec 00:18:54.685 WRITE: bw=53.2MiB/s (55.8MB/s), 53.2MiB/s-53.2MiB/s (55.8MB/s-55.8MB/s), io=256MiB (268MB), run=4809-4809msec 00:18:56.598 ----------------------------------------------------- 00:18:56.598 Suppressions used: 00:18:56.598 count bytes template 00:18:56.598 1 5 /usr/src/fio/parse.c 00:18:56.598 2 192 /usr/src/fio/iolog.c 00:18:56.598 1 8 libtcmalloc_minimal.so 00:18:56.598 1 904 libcrypto.so 00:18:56.598 ----------------------------------------------------- 00:18:56.598 00:18:56.598 03:06:50 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:56.598 03:06:50 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.598 03:06:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:56.598 03:06:50 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:56.598 Remove shared memory files 00:18:56.598 03:06:50 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:56.598 03:06:50 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:56.598 03:06:50 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:56.598 03:06:50 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:56.598 03:06:50 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57107 /dev/shm/spdk_tgt_trace.pid74051 00:18:56.598 03:06:50 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:56.598 03:06:50 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:56.598 ************************************ 00:18:56.598 END TEST ftl_fio_basic 00:18:56.598 ************************************ 00:18:56.598 00:18:56.598 real 1m4.663s 00:18:56.598 user 2m22.324s 00:18:56.598 sys 0m2.953s 00:18:56.598 03:06:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.598 03:06:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:56.598 03:06:50 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:56.598 03:06:50 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:56.598 03:06:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.598 03:06:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:56.598 ************************************ 00:18:56.598 START TEST ftl_bdevperf 00:18:56.598 ************************************ 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:56.598 * Looking for test storage... 
00:18:56.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.598 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:56.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.599 --rc genhtml_branch_coverage=1 00:18:56.599 --rc genhtml_function_coverage=1 00:18:56.599 --rc genhtml_legend=1 00:18:56.599 --rc geninfo_all_blocks=1 00:18:56.599 --rc geninfo_unexecuted_blocks=1 00:18:56.599 00:18:56.599 ' 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:56.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.599 --rc genhtml_branch_coverage=1 00:18:56.599 
--rc genhtml_function_coverage=1 00:18:56.599 --rc genhtml_legend=1 00:18:56.599 --rc geninfo_all_blocks=1 00:18:56.599 --rc geninfo_unexecuted_blocks=1 00:18:56.599 00:18:56.599 ' 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:56.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.599 --rc genhtml_branch_coverage=1 00:18:56.599 --rc genhtml_function_coverage=1 00:18:56.599 --rc genhtml_legend=1 00:18:56.599 --rc geninfo_all_blocks=1 00:18:56.599 --rc geninfo_unexecuted_blocks=1 00:18:56.599 00:18:56.599 ' 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:56.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.599 --rc genhtml_branch_coverage=1 00:18:56.599 --rc genhtml_function_coverage=1 00:18:56.599 --rc genhtml_legend=1 00:18:56.599 --rc geninfo_all_blocks=1 00:18:56.599 --rc geninfo_unexecuted_blocks=1 00:18:56.599 00:18:56.599 ' 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75981 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75981 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75981 ']' 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.599 03:06:50 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:56.599 [2024-12-10 03:06:50.880449] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
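Everything bdevperf needs is assembled over JSON-RPC in the trace that follows. Condensed from the rpc.py calls visible below, a sketch of the device stack for this run — $lvs_uuid and $lvol stand in for the UUIDs the log reports, and the comments restate the log's own numbers:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # base device: local NVMe wrapped in an lvstore; FTL gets a thin lvol on top
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid"   # returns $lvol

  # size check mirrored from the trace: block_size * num_blocks, in MiB
  $rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size, .[] .num_blocks'
  # 4096 * 1310720 / (1024 * 1024) = 5120 MiB

  # cache device: second NVMe, split to the computed cache size (5171 MiB here)
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $rpc bdev_split_create nvc0n1 -s 5171 1

  # FTL bdev on top of both, with the L2P capped at 20 MiB of DRAM
  $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 20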
00:18:56.599 [2024-12-10 03:06:50.880785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75981 ] 00:18:56.859 [2024-12-10 03:06:51.046947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.859 [2024-12-10 03:06:51.167572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.429 03:06:51 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.429 03:06:51 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:57.429 03:06:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:57.429 03:06:51 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:57.429 03:06:51 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:57.429 03:06:51 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:57.429 03:06:51 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:57.429 03:06:51 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:57.690 03:06:52 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:57.690 03:06:52 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:57.690 03:06:52 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:57.690 03:06:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:57.690 03:06:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:57.690 03:06:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:57.690 03:06:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:57.690 03:06:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:57.951 03:06:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:57.951 { 00:18:57.951 "name": "nvme0n1", 00:18:57.951 "aliases": [ 00:18:57.951 "5e0a9606-f3c3-4aed-8c3c-0cbb3612f3fa" 00:18:57.951 ], 00:18:57.951 "product_name": "NVMe disk", 00:18:57.951 "block_size": 4096, 00:18:57.951 "num_blocks": 1310720, 00:18:57.951 "uuid": "5e0a9606-f3c3-4aed-8c3c-0cbb3612f3fa", 00:18:57.951 "numa_id": -1, 00:18:57.951 "assigned_rate_limits": { 00:18:57.951 "rw_ios_per_sec": 0, 00:18:57.951 "rw_mbytes_per_sec": 0, 00:18:57.951 "r_mbytes_per_sec": 0, 00:18:57.951 "w_mbytes_per_sec": 0 00:18:57.951 }, 00:18:57.951 "claimed": true, 00:18:57.951 "claim_type": "read_many_write_one", 00:18:57.951 "zoned": false, 00:18:57.951 "supported_io_types": { 00:18:57.951 "read": true, 00:18:57.951 "write": true, 00:18:57.951 "unmap": true, 00:18:57.951 "flush": true, 00:18:57.951 "reset": true, 00:18:57.951 "nvme_admin": true, 00:18:57.951 "nvme_io": true, 00:18:57.951 "nvme_io_md": false, 00:18:57.951 "write_zeroes": true, 00:18:57.951 "zcopy": false, 00:18:57.951 "get_zone_info": false, 00:18:57.951 "zone_management": false, 00:18:57.951 "zone_append": false, 00:18:57.951 "compare": true, 00:18:57.951 "compare_and_write": false, 00:18:57.951 "abort": true, 00:18:57.951 "seek_hole": false, 00:18:57.951 "seek_data": false, 00:18:57.951 "copy": true, 00:18:57.951 "nvme_iov_md": false 00:18:57.951 }, 00:18:57.951 "driver_specific": { 00:18:57.951 
"nvme": [ 00:18:57.951 { 00:18:57.951 "pci_address": "0000:00:11.0", 00:18:57.951 "trid": { 00:18:57.951 "trtype": "PCIe", 00:18:57.951 "traddr": "0000:00:11.0" 00:18:57.951 }, 00:18:57.951 "ctrlr_data": { 00:18:57.951 "cntlid": 0, 00:18:57.951 "vendor_id": "0x1b36", 00:18:57.951 "model_number": "QEMU NVMe Ctrl", 00:18:57.951 "serial_number": "12341", 00:18:57.951 "firmware_revision": "8.0.0", 00:18:57.951 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:57.951 "oacs": { 00:18:57.951 "security": 0, 00:18:57.951 "format": 1, 00:18:57.951 "firmware": 0, 00:18:57.951 "ns_manage": 1 00:18:57.951 }, 00:18:57.951 "multi_ctrlr": false, 00:18:57.951 "ana_reporting": false 00:18:57.951 }, 00:18:57.951 "vs": { 00:18:57.951 "nvme_version": "1.4" 00:18:57.951 }, 00:18:57.951 "ns_data": { 00:18:57.951 "id": 1, 00:18:57.951 "can_share": false 00:18:57.951 } 00:18:57.951 } 00:18:57.951 ], 00:18:57.951 "mp_policy": "active_passive" 00:18:57.951 } 00:18:57.951 } 00:18:57.951 ]' 00:18:57.951 03:06:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:57.951 03:06:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:57.951 03:06:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:58.214 03:06:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:58.214 03:06:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:58.214 03:06:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:58.214 03:06:52 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:58.214 03:06:52 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:58.214 03:06:52 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:58.214 03:06:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:58.214 03:06:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:58.214 03:06:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=ffc353cd-de3b-46c2-9b02-c3b2c14000b5 00:18:58.214 03:06:52 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:58.214 03:06:52 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ffc353cd-de3b-46c2-9b02-c3b2c14000b5 00:18:58.479 03:06:52 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:58.740 03:06:53 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=15b6c5d8-87cd-4b3c-b9c5-01d85814f18f 00:18:58.740 03:06:53 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 15b6c5d8-87cd-4b3c-b9c5-01d85814f18f 00:18:59.001 03:06:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=1c07b054-1fca-4836-b901-5c61f74acd71 00:18:59.001 03:06:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1c07b054-1fca-4836-b901-5c61f74acd71 00:18:59.001 03:06:53 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:59.001 03:06:53 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:59.001 03:06:53 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=1c07b054-1fca-4836-b901-5c61f74acd71 00:18:59.001 03:06:53 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:59.001 03:06:53 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 1c07b054-1fca-4836-b901-5c61f74acd71 00:18:59.001 03:06:53 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=1c07b054-1fca-4836-b901-5c61f74acd71 00:18:59.001 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:59.001 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:59.001 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:59.001 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c07b054-1fca-4836-b901-5c61f74acd71 00:18:59.263 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:59.263 { 00:18:59.263 "name": "1c07b054-1fca-4836-b901-5c61f74acd71", 00:18:59.263 "aliases": [ 00:18:59.263 "lvs/nvme0n1p0" 00:18:59.263 ], 00:18:59.263 "product_name": "Logical Volume", 00:18:59.263 "block_size": 4096, 00:18:59.263 "num_blocks": 26476544, 00:18:59.263 "uuid": "1c07b054-1fca-4836-b901-5c61f74acd71", 00:18:59.263 "assigned_rate_limits": { 00:18:59.263 "rw_ios_per_sec": 0, 00:18:59.263 "rw_mbytes_per_sec": 0, 00:18:59.263 "r_mbytes_per_sec": 0, 00:18:59.263 "w_mbytes_per_sec": 0 00:18:59.263 }, 00:18:59.263 "claimed": false, 00:18:59.263 "zoned": false, 00:18:59.263 "supported_io_types": { 00:18:59.263 "read": true, 00:18:59.263 "write": true, 00:18:59.263 "unmap": true, 00:18:59.263 "flush": false, 00:18:59.263 "reset": true, 00:18:59.263 "nvme_admin": false, 00:18:59.263 "nvme_io": false, 00:18:59.263 "nvme_io_md": false, 00:18:59.263 "write_zeroes": true, 00:18:59.263 "zcopy": false, 00:18:59.263 "get_zone_info": false, 00:18:59.263 "zone_management": false, 00:18:59.263 "zone_append": false, 00:18:59.263 "compare": false, 00:18:59.263 "compare_and_write": false, 00:18:59.263 "abort": false, 00:18:59.263 "seek_hole": true, 00:18:59.263 "seek_data": true, 00:18:59.263 "copy": false, 00:18:59.263 "nvme_iov_md": false 00:18:59.263 }, 00:18:59.263 "driver_specific": { 00:18:59.263 "lvol": { 00:18:59.263 "lvol_store_uuid": "15b6c5d8-87cd-4b3c-b9c5-01d85814f18f", 00:18:59.263 "base_bdev": "nvme0n1", 00:18:59.263 "thin_provision": true, 00:18:59.263 "num_allocated_clusters": 0, 00:18:59.263 "snapshot": false, 00:18:59.263 "clone": false, 00:18:59.263 "esnap_clone": false 00:18:59.263 } 00:18:59.263 } 00:18:59.263 } 00:18:59.263 ]' 00:18:59.263 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:59.263 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:59.263 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:59.263 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:59.263 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:59.263 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:59.263 03:06:53 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:59.263 03:06:53 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:59.263 03:06:53 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:59.529 03:06:53 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:59.529 03:06:53 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:59.529 03:06:53 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 1c07b054-1fca-4836-b901-5c61f74acd71 00:18:59.530 03:06:53 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=1c07b054-1fca-4836-b901-5c61f74acd71 00:18:59.530 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:59.530 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:59.530 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:59.530 03:06:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c07b054-1fca-4836-b901-5c61f74acd71 00:18:59.788 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:59.788 { 00:18:59.788 "name": "1c07b054-1fca-4836-b901-5c61f74acd71", 00:18:59.788 "aliases": [ 00:18:59.788 "lvs/nvme0n1p0" 00:18:59.788 ], 00:18:59.788 "product_name": "Logical Volume", 00:18:59.788 "block_size": 4096, 00:18:59.788 "num_blocks": 26476544, 00:18:59.788 "uuid": "1c07b054-1fca-4836-b901-5c61f74acd71", 00:18:59.788 "assigned_rate_limits": { 00:18:59.788 "rw_ios_per_sec": 0, 00:18:59.788 "rw_mbytes_per_sec": 0, 00:18:59.788 "r_mbytes_per_sec": 0, 00:18:59.788 "w_mbytes_per_sec": 0 00:18:59.788 }, 00:18:59.788 "claimed": false, 00:18:59.788 "zoned": false, 00:18:59.788 "supported_io_types": { 00:18:59.788 "read": true, 00:18:59.788 "write": true, 00:18:59.788 "unmap": true, 00:18:59.788 "flush": false, 00:18:59.788 "reset": true, 00:18:59.788 "nvme_admin": false, 00:18:59.788 "nvme_io": false, 00:18:59.788 "nvme_io_md": false, 00:18:59.788 "write_zeroes": true, 00:18:59.788 "zcopy": false, 00:18:59.788 "get_zone_info": false, 00:18:59.788 "zone_management": false, 00:18:59.788 "zone_append": false, 00:18:59.788 "compare": false, 00:18:59.788 "compare_and_write": false, 00:18:59.788 "abort": false, 00:18:59.788 "seek_hole": true, 00:18:59.788 "seek_data": true, 00:18:59.788 "copy": false, 00:18:59.788 "nvme_iov_md": false 00:18:59.788 }, 00:18:59.788 "driver_specific": { 00:18:59.788 "lvol": { 00:18:59.788 "lvol_store_uuid": "15b6c5d8-87cd-4b3c-b9c5-01d85814f18f", 00:18:59.788 "base_bdev": "nvme0n1", 00:18:59.788 "thin_provision": true, 00:18:59.788 "num_allocated_clusters": 0, 00:18:59.788 "snapshot": false, 00:18:59.788 "clone": false, 00:18:59.788 "esnap_clone": false 00:18:59.788 } 00:18:59.788 } 00:18:59.788 } 00:18:59.789 ]' 00:18:59.789 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:59.789 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:59.789 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:59.789 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:59.789 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:59.789 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:59.789 03:06:54 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:59.789 03:06:54 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:00.049 03:06:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:00.049 03:06:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 1c07b054-1fca-4836-b901-5c61f74acd71 00:19:00.049 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=1c07b054-1fca-4836-b901-5c61f74acd71 00:19:00.049 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:00.049 03:06:54 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:00.049 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:00.049 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c07b054-1fca-4836-b901-5c61f74acd71 00:19:00.311 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:00.311 { 00:19:00.311 "name": "1c07b054-1fca-4836-b901-5c61f74acd71", 00:19:00.311 "aliases": [ 00:19:00.311 "lvs/nvme0n1p0" 00:19:00.311 ], 00:19:00.311 "product_name": "Logical Volume", 00:19:00.311 "block_size": 4096, 00:19:00.311 "num_blocks": 26476544, 00:19:00.311 "uuid": "1c07b054-1fca-4836-b901-5c61f74acd71", 00:19:00.311 "assigned_rate_limits": { 00:19:00.311 "rw_ios_per_sec": 0, 00:19:00.311 "rw_mbytes_per_sec": 0, 00:19:00.311 "r_mbytes_per_sec": 0, 00:19:00.311 "w_mbytes_per_sec": 0 00:19:00.311 }, 00:19:00.311 "claimed": false, 00:19:00.311 "zoned": false, 00:19:00.311 "supported_io_types": { 00:19:00.311 "read": true, 00:19:00.311 "write": true, 00:19:00.311 "unmap": true, 00:19:00.311 "flush": false, 00:19:00.311 "reset": true, 00:19:00.311 "nvme_admin": false, 00:19:00.311 "nvme_io": false, 00:19:00.311 "nvme_io_md": false, 00:19:00.311 "write_zeroes": true, 00:19:00.311 "zcopy": false, 00:19:00.311 "get_zone_info": false, 00:19:00.311 "zone_management": false, 00:19:00.311 "zone_append": false, 00:19:00.311 "compare": false, 00:19:00.311 "compare_and_write": false, 00:19:00.311 "abort": false, 00:19:00.311 "seek_hole": true, 00:19:00.311 "seek_data": true, 00:19:00.311 "copy": false, 00:19:00.311 "nvme_iov_md": false 00:19:00.311 }, 00:19:00.311 "driver_specific": { 00:19:00.311 "lvol": { 00:19:00.311 "lvol_store_uuid": "15b6c5d8-87cd-4b3c-b9c5-01d85814f18f", 00:19:00.311 "base_bdev": "nvme0n1", 00:19:00.311 "thin_provision": true, 00:19:00.311 "num_allocated_clusters": 0, 00:19:00.311 "snapshot": false, 00:19:00.311 "clone": false, 00:19:00.311 "esnap_clone": false 00:19:00.311 } 00:19:00.311 } 00:19:00.311 } 00:19:00.311 ]' 00:19:00.311 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:00.311 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:00.311 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:00.311 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:00.311 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:00.311 03:06:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:00.311 03:06:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:00.311 03:06:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1c07b054-1fca-4836-b901-5c61f74acd71 -c nvc0n1p0 --l2p_dram_limit 20 00:19:00.571 [2024-12-10 03:06:54.738104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.571 [2024-12-10 03:06:54.738147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:00.571 [2024-12-10 03:06:54.738158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:00.571 [2024-12-10 03:06:54.738166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.571 [2024-12-10 03:06:54.738212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.571 [2024-12-10 03:06:54.738222] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:00.571 [2024-12-10 03:06:54.738229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:00.572 [2024-12-10 03:06:54.738236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.572 [2024-12-10 03:06:54.738249] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:00.572 [2024-12-10 03:06:54.738833] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:00.572 [2024-12-10 03:06:54.738849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.572 [2024-12-10 03:06:54.738857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:00.572 [2024-12-10 03:06:54.738864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:19:00.572 [2024-12-10 03:06:54.738871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.572 [2024-12-10 03:06:54.739129] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a46b198f-702e-44c1-b5e9-289446d3bc49 00:19:00.572 [2024-12-10 03:06:54.740193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.572 [2024-12-10 03:06:54.740221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:00.572 [2024-12-10 03:06:54.740234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:00.572 [2024-12-10 03:06:54.740240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.572 [2024-12-10 03:06:54.744862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.572 [2024-12-10 03:06:54.744976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:00.572 [2024-12-10 03:06:54.744992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.586 ms 00:19:00.572 [2024-12-10 03:06:54.745000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.572 [2024-12-10 03:06:54.745068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.572 [2024-12-10 03:06:54.745076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:00.572 [2024-12-10 03:06:54.745086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:00.572 [2024-12-10 03:06:54.745092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.572 [2024-12-10 03:06:54.745124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.572 [2024-12-10 03:06:54.745132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:00.572 [2024-12-10 03:06:54.745139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:00.572 [2024-12-10 03:06:54.745144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.572 [2024-12-10 03:06:54.745162] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:00.572 [2024-12-10 03:06:54.747995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.572 [2024-12-10 03:06:54.748086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:00.572 [2024-12-10 03:06:54.748098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:19:00.572 [2024-12-10 03:06:54.748109] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.572 [2024-12-10 03:06:54.748134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.572 [2024-12-10 03:06:54.748141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:00.572 [2024-12-10 03:06:54.748148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:00.572 [2024-12-10 03:06:54.748156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.572 [2024-12-10 03:06:54.748167] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:00.572 [2024-12-10 03:06:54.748277] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:00.572 [2024-12-10 03:06:54.748286] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:00.572 [2024-12-10 03:06:54.748295] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:00.572 [2024-12-10 03:06:54.748303] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:00.572 [2024-12-10 03:06:54.748311] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:00.572 [2024-12-10 03:06:54.748317] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:00.572 [2024-12-10 03:06:54.748324] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:00.572 [2024-12-10 03:06:54.748330] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:00.572 [2024-12-10 03:06:54.748338] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:00.572 [2024-12-10 03:06:54.748345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.572 [2024-12-10 03:06:54.748352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:00.572 [2024-12-10 03:06:54.748357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:19:00.572 [2024-12-10 03:06:54.748364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.572 [2024-12-10 03:06:54.748447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.572 [2024-12-10 03:06:54.748456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:00.572 [2024-12-10 03:06:54.748462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:00.572 [2024-12-10 03:06:54.748471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.572 [2024-12-10 03:06:54.748539] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:00.572 [2024-12-10 03:06:54.748549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:00.572 [2024-12-10 03:06:54.748555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:00.572 [2024-12-10 03:06:54.748562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.572 [2024-12-10 03:06:54.748568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:00.572 [2024-12-10 03:06:54.748574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:00.572 [2024-12-10 03:06:54.748579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:00.572 
[2024-12-10 03:06:54.748585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:00.572 [2024-12-10 03:06:54.748590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:00.572 [2024-12-10 03:06:54.748597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:00.572 [2024-12-10 03:06:54.748602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:00.572 [2024-12-10 03:06:54.748613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:00.572 [2024-12-10 03:06:54.748619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:00.572 [2024-12-10 03:06:54.748625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:00.572 [2024-12-10 03:06:54.748631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:00.572 [2024-12-10 03:06:54.748638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.572 [2024-12-10 03:06:54.748643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:00.572 [2024-12-10 03:06:54.748649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:00.572 [2024-12-10 03:06:54.748654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.572 [2024-12-10 03:06:54.748661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:00.572 [2024-12-10 03:06:54.748666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:00.572 [2024-12-10 03:06:54.748672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.572 [2024-12-10 03:06:54.748677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:00.572 [2024-12-10 03:06:54.748683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:00.572 [2024-12-10 03:06:54.748688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.572 [2024-12-10 03:06:54.748696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:00.572 [2024-12-10 03:06:54.748701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:00.572 [2024-12-10 03:06:54.748707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.572 [2024-12-10 03:06:54.748712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:00.572 [2024-12-10 03:06:54.748719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:00.572 [2024-12-10 03:06:54.748724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.572 [2024-12-10 03:06:54.748732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:00.572 [2024-12-10 03:06:54.748737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:00.572 [2024-12-10 03:06:54.748743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:00.572 [2024-12-10 03:06:54.748748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:00.572 [2024-12-10 03:06:54.748754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:00.572 [2024-12-10 03:06:54.748759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:00.572 [2024-12-10 03:06:54.748766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:00.572 [2024-12-10 03:06:54.748771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:00.572 [2024-12-10 03:06:54.748778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.572 [2024-12-10 03:06:54.748782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:00.572 [2024-12-10 03:06:54.748789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:00.572 [2024-12-10 03:06:54.748793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.572 [2024-12-10 03:06:54.748800] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:00.572 [2024-12-10 03:06:54.748805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:00.572 [2024-12-10 03:06:54.748812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:00.572 [2024-12-10 03:06:54.748817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.572 [2024-12-10 03:06:54.748826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:00.573 [2024-12-10 03:06:54.748831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:00.573 [2024-12-10 03:06:54.748838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:00.573 [2024-12-10 03:06:54.748843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:00.573 [2024-12-10 03:06:54.748849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:00.573 [2024-12-10 03:06:54.748854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:00.573 [2024-12-10 03:06:54.748862] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:00.573 [2024-12-10 03:06:54.748868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:00.573 [2024-12-10 03:06:54.748876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:00.573 [2024-12-10 03:06:54.748881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:00.573 [2024-12-10 03:06:54.748890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:00.573 [2024-12-10 03:06:54.748895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:00.573 [2024-12-10 03:06:54.748902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:00.573 [2024-12-10 03:06:54.748908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:00.573 [2024-12-10 03:06:54.748915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:00.573 [2024-12-10 03:06:54.748920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:00.573 [2024-12-10 03:06:54.748929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:00.573 [2024-12-10 03:06:54.748934] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:00.573 [2024-12-10 03:06:54.748941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:00.573 [2024-12-10 03:06:54.748946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:00.573 [2024-12-10 03:06:54.748953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:00.573 [2024-12-10 03:06:54.748959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:00.573 [2024-12-10 03:06:54.748965] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:00.573 [2024-12-10 03:06:54.748971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:00.573 [2024-12-10 03:06:54.748980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:00.573 [2024-12-10 03:06:54.748986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:00.573 [2024-12-10 03:06:54.748992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:00.573 [2024-12-10 03:06:54.748998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:00.573 [2024-12-10 03:06:54.749005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.573 [2024-12-10 03:06:54.749010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:00.573 [2024-12-10 03:06:54.749017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:19:00.573 [2024-12-10 03:06:54.749022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.573 [2024-12-10 03:06:54.749059] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
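For reference, the FTL geometry in the layout dump above can be re-derived from the jq output at the top of this test with plain shell arithmetic (a sketch; every number below is taken from NOTICE lines in this log):

    # base device: 26476544 blocks x 4096 B/block = 103424 MiB, as ftl_layout.c reports
    echo $(( 26476544 * 4096 / 1048576 ))   # -> 103424
    # L2P table: 20971520 entries x 4 B address size = 80 MiB, matching the 80.00 MiB l2p region;
    # with --l2p_dram_limit 20 the table cannot be fully resident, which is why
    # ftl_l2p_cache reports "l2p maximum resident size is: 19 (of 20) MiB" further down
    echo $(( 20971520 * 4 / 1048576 ))      # -> 80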
00:19:00.573 [2024-12-10 03:06:54.749067] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:04.778 [2024-12-10 03:06:58.820243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.778 [2024-12-10 03:06:58.820332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:04.778 [2024-12-10 03:06:58.820352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4071.161 ms 00:19:04.778 [2024-12-10 03:06:58.820362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.779 [2024-12-10 03:06:58.851643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.779 [2024-12-10 03:06:58.851702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.779 [2024-12-10 03:06:58.851719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.013 ms 00:19:04.779 [2024-12-10 03:06:58.851728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.779 [2024-12-10 03:06:58.851874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.779 [2024-12-10 03:06:58.851886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:04.779 [2024-12-10 03:06:58.851912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:04.779 [2024-12-10 03:06:58.851921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.779 [2024-12-10 03:06:58.900281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.779 [2024-12-10 03:06:58.900335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.779 [2024-12-10 03:06:58.900352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.323 ms 00:19:04.779 [2024-12-10 03:06:58.900362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.779 [2024-12-10 03:06:58.900431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.779 [2024-12-10 03:06:58.900442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.779 [2024-12-10 03:06:58.900453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:04.779 [2024-12-10 03:06:58.900464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.779 [2024-12-10 03:06:58.901064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.779 [2024-12-10 03:06:58.901094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.779 [2024-12-10 03:06:58.901107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:19:04.779 [2024-12-10 03:06:58.901116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.779 [2024-12-10 03:06:58.901238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.779 [2024-12-10 03:06:58.901248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:04.779 [2024-12-10 03:06:58.901262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:04.779 [2024-12-10 03:06:58.901270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.779 [2024-12-10 03:06:58.916912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.779 [2024-12-10 03:06:58.916953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.779 [2024-12-10 
03:06:58.916967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.620 ms 00:19:04.779 [2024-12-10 03:06:58.916983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.779 [2024-12-10 03:06:58.930171] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:04.779 [2024-12-10 03:06:58.937932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.779 [2024-12-10 03:06:58.937981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:04.779 [2024-12-10 03:06:58.937992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.848 ms 00:19:04.779 [2024-12-10 03:06:58.938003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.779 [2024-12-10 03:06:59.040800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.779 [2024-12-10 03:06:59.040874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:04.779 [2024-12-10 03:06:59.040893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.767 ms 00:19:04.779 [2024-12-10 03:06:59.040906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.779 [2024-12-10 03:06:59.041121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.779 [2024-12-10 03:06:59.041138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:04.779 [2024-12-10 03:06:59.041148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:19:04.779 [2024-12-10 03:06:59.041162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.779 [2024-12-10 03:06:59.068007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.779 [2024-12-10 03:06:59.068229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:04.779 [2024-12-10 03:06:59.068252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.783 ms 00:19:04.779 [2024-12-10 03:06:59.068264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.779 [2024-12-10 03:06:59.094140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.779 [2024-12-10 03:06:59.094194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:04.779 [2024-12-10 03:06:59.094207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.829 ms 00:19:04.779 [2024-12-10 03:06:59.094218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.779 [2024-12-10 03:06:59.094885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.779 [2024-12-10 03:06:59.094903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:04.779 [2024-12-10 03:06:59.094914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:19:04.779 [2024-12-10 03:06:59.094924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.041 [2024-12-10 03:06:59.182783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.041 [2024-12-10 03:06:59.182844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:05.041 [2024-12-10 03:06:59.182859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.814 ms 00:19:05.041 [2024-12-10 03:06:59.182870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.041 [2024-12-10 
03:06:59.211431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:05.041 [2024-12-10 03:06:59.211486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:19:05.041 [2024-12-10 03:06:59.211503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.463 ms
00:19:05.041 [2024-12-10 03:06:59.211514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:05.041 [2024-12-10 03:06:59.238806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:05.041 [2024-12-10 03:06:59.238866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:19:05.041 [2024-12-10 03:06:59.238879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.239 ms
00:19:05.041 [2024-12-10 03:06:59.238889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:05.041 [2024-12-10 03:06:59.266565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:05.041 [2024-12-10 03:06:59.266621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:19:05.041 [2024-12-10 03:06:59.266634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.625 ms
00:19:05.041 [2024-12-10 03:06:59.266644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:05.041 [2024-12-10 03:06:59.266700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:05.041 [2024-12-10 03:06:59.266715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:19:05.041 [2024-12-10 03:06:59.266725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:19:05.041 [2024-12-10 03:06:59.266737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:05.041 [2024-12-10 03:06:59.266847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:05.041 [2024-12-10 03:06:59.266862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:19:05.041 [2024-12-10 03:06:59.266871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:19:05.041 [2024-12-10 03:06:59.266881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:05.041 [2024-12-10 03:06:59.268065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4529.429 ms, result 0
00:19:05.041 {
00:19:05.041 "name": "ftl0",
00:19:05.041 "uuid": "a46b198f-702e-44c1-b5e9-289446d3bc49"
00:19:05.041 }
00:19:05.041 03:06:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0
00:19:05.041 03:06:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name
00:19:05.041 03:06:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0
00:19:05.302 03:06:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-12-10 03:06:59.600230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
I/O size of 69632 is greater than zero copy threshold (65536).
00:19:05.302 Zero copy mechanism will not be used.
00:19:05.302 Running I/O for 4 seconds...
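This first bdevperf pass drives ftl0 at queue depth 1 with 69632-byte random writes; 69632 B is 17 logical blocks of 4096 B (68 KiB), and because it exceeds the 65536 B zero-copy threshold each I/O is staged through a bounce buffer, exactly as the notice above says. A hedged sketch of the equivalent manual invocation (same script path as above; it assumes a bdevperf process is already running and listening on its default RPC socket):

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests \
        -q 1 -w randwrite -t 4 -o 69632   # depth 1, random writes, 4 s, 68 KiB I/Os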
00:19:07.632 969.00 IOPS, 64.35 MiB/s
[2024-12-10T03:07:02.962Z] 892.50 IOPS, 59.27 MiB/s
[2024-12-10T03:07:03.906Z] 1049.00 IOPS, 69.66 MiB/s
[2024-12-10T03:07:03.906Z] 1279.25 IOPS, 84.95 MiB/s
00:19:09.518 Latency(us)
00:19:09.518 [2024-12-10T03:07:03.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:09.518 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:19:09.518 ftl0 : 4.00 1279.05 84.94 0.00 0.00 821.69 156.75 17442.66
00:19:09.518 [2024-12-10T03:07:03.906Z] ===================================================================================================================
00:19:09.518 [2024-12-10T03:07:03.906Z] Total : 1279.05 84.94 0.00 0.00 821.69 156.75 17442.66
00:19:09.518 [2024-12-10 03:07:03.610572] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:19:09.518 "results": [
00:19:09.518 {
00:19:09.518 "job": "ftl0",
00:19:09.518 "core_mask": "0x1",
00:19:09.518 "workload": "randwrite",
00:19:09.518 "status": "finished",
00:19:09.518 "queue_depth": 1,
00:19:09.518 "io_size": 69632,
00:19:09.518 "runtime": 4.001412,
00:19:09.518 "iops": 1279.048495880954,
00:19:09.518 "mibps": 84.9368141795946,
00:19:09.518 "io_failed": 0,
00:19:09.518 "io_timeout": 0,
00:19:09.518 "avg_latency_us": 821.6866636606848,
00:19:09.518 "min_latency_us": 156.75076923076924,
00:19:09.518 "max_latency_us": 17442.65846153846
00:19:09.518 }
00:19:09.518 ],
00:19:09.518 "core_count": 1
00:19:09.518 }
00:19:09.518 03:07:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-12-10 03:07:03.719238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
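The summary above is internally consistent: throughput is IOPS times I/O size, 1279.05 x 69632 / 2^20 ≈ 84.94 MiB/s, the figure printed in both the table and the JSON. A quick check (a sketch using only numbers from the results above):

    awk 'BEGIN { printf "%.2f MiB/s\n", 1279.05 * 69632 / 1048576 }'   # -> 84.94 MiB/s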
00:19:11.412 8056.00 IOPS, 31.47 MiB/s
[2024-12-10T03:07:06.733Z] 7020.50 IOPS, 27.42 MiB/s
[2024-12-10T03:07:08.111Z] 6721.67 IOPS, 26.26 MiB/s
[2024-12-10T03:07:08.111Z] 6508.00 IOPS, 25.42 MiB/s
00:19:13.724 Latency(us)
00:19:13.724 [2024-12-10T03:07:08.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:13.724 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:19:13.724 ftl0 : 4.03 6496.83 25.38 0.00 0.00 19639.39 280.42 46782.62
00:19:13.724 [2024-12-10T03:07:08.112Z] ===================================================================================================================
00:19:13.724 [2024-12-10T03:07:08.112Z] Total : 6496.83 25.38 0.00 0.00 19639.39 0.00 46782.62
00:19:13.724 [2024-12-10 03:07:07.753805] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:19:13.724 "results": [
00:19:13.724 {
00:19:13.724 "job": "ftl0",
00:19:13.724 "core_mask": "0x1",
00:19:13.724 "workload": "randwrite",
00:19:13.724 "status": "finished",
00:19:13.724 "queue_depth": 128,
00:19:13.724 "io_size": 4096,
00:19:13.724 "runtime": 4.026579,
00:19:13.724 "iops": 6496.830187610873,
00:19:13.724 "mibps": 25.37824292035497,
00:19:13.724 "io_failed": 0,
00:19:13.724 "io_timeout": 0,
00:19:13.724 "avg_latency_us": 19639.394039990588,
00:19:13.724 "min_latency_us": 280.41846153846154,
00:19:13.724 "max_latency_us": 46782.621538461535
00:19:13.724 }
00:19:13.724 ],
00:19:13.724 "core_count": 1
00:19:13.724 }
00:19:13.724 03:07:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-12-10 03:07:07.863250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
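At queue depth 128 the average latency in the run above tracks Little's law: latency ≈ depth / IOPS = 128 / 6496.83 ≈ 19.70 ms, close to the 19639.39 us average reported (the small residual comes from the actual 4.027 s runtime versus the nominal 4 s). A sketch of the check:

    awk 'BEGIN { printf "%.2f ms\n", 128 / 6496.83 * 1000 }'   # -> 19.70 ms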
00:19:15.612 5681.00 IOPS, 22.19 MiB/s
[2024-12-10T03:07:10.943Z] 5348.50 IOPS, 20.89 MiB/s
[2024-12-10T03:07:11.888Z] 5174.33 IOPS, 20.21 MiB/s
[2024-12-10T03:07:12.149Z] 5168.50 IOPS, 20.19 MiB/s
00:19:17.762 Latency(us)
00:19:17.762 [2024-12-10T03:07:12.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:17.762 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:17.762 Verification LBA range: start 0x0 length 0x1400000
00:19:17.762 ftl0 : 4.02 5179.20 20.23 0.00 0.00 24634.30 242.61 36296.86
00:19:17.762 [2024-12-10T03:07:12.150Z] ===================================================================================================================
00:19:17.762 [2024-12-10T03:07:12.150Z] Total : 5179.20 20.23 0.00 0.00 24634.30 0.00 36296.86
00:19:17.762 {
00:19:17.762 "results": [
00:19:17.762 {
00:19:17.762 "job": "ftl0",
00:19:17.762 "core_mask": "0x1",
00:19:17.762 "workload": "verify",
00:19:17.762 "status": "finished",
00:19:17.762 "verify_range": {
00:19:17.762 "start": 0,
00:19:17.762 "length": 20971520
00:19:17.762 },
00:19:17.762 "queue_depth": 128,
00:19:17.762 "io_size": 4096,
00:19:17.762 "runtime": 4.016454,
00:19:17.762 "iops": 5179.195379805172,
00:19:17.762 "mibps": 20.231231952363952,
00:19:17.762 "io_failed": 0,
00:19:17.762 "io_timeout": 0,
00:19:17.762 "avg_latency_us": 24634.30475590365,
00:19:17.762 "min_latency_us": 242.60923076923078,
00:19:17.762 "max_latency_us": 36296.86153846154
00:19:17.762 }
00:19:17.762 ],
00:19:17.762 "core_count": 1
00:19:17.762 }
00:19:17.762 [2024-12-10 03:07:11.895434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
03:07:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-12-10 03:07:12.110835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-10 03:07:12.110907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-12-10 03:07:12.110921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
[2024-12-10 03:07:12.110933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-10 03:07:12.110956] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-12-10 03:07:12.114119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-10 03:07:12.114171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-12-10 03:07:12.114185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.137 ms
[2024-12-10 03:07:12.114194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-10 03:07:12.117660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-10 03:07:12.117719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-12-10 03:07:12.117738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.430 ms
[2024-12-10 03:07:12.117746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:18.024 [2024-12-10 03:07:12.332894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:18.024 [2024-12-10 03:07:12.332958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:19:18.024 [2024-12-10 03:07:12.332982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 215.118 ms 00:19:18.024 [2024-12-10 03:07:12.332992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.024 [2024-12-10 03:07:12.339243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.025 [2024-12-10 03:07:12.339296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:18.025 [2024-12-10 03:07:12.339312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.199 ms 00:19:18.025 [2024-12-10 03:07:12.339324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.025 [2024-12-10 03:07:12.366673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.025 [2024-12-10 03:07:12.366731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:18.025 [2024-12-10 03:07:12.366748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.243 ms 00:19:18.025 [2024-12-10 03:07:12.366756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.025 [2024-12-10 03:07:12.385473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.025 [2024-12-10 03:07:12.385538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:18.025 [2024-12-10 03:07:12.385555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.656 ms 00:19:18.025 [2024-12-10 03:07:12.385564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.025 [2024-12-10 03:07:12.385740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.025 [2024-12-10 03:07:12.385753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:18.025 [2024-12-10 03:07:12.385769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:19:18.025 [2024-12-10 03:07:12.385777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.287 [2024-12-10 03:07:12.413105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.287 [2024-12-10 03:07:12.413165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:18.287 [2024-12-10 03:07:12.413180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.305 ms 00:19:18.287 [2024-12-10 03:07:12.413188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.287 [2024-12-10 03:07:12.440150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.287 [2024-12-10 03:07:12.440207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:18.287 [2024-12-10 03:07:12.440222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.902 ms 00:19:18.287 [2024-12-10 03:07:12.440229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.287 [2024-12-10 03:07:12.465838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.288 [2024-12-10 03:07:12.465895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:18.288 [2024-12-10 03:07:12.465910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.551 ms 00:19:18.288 [2024-12-10 03:07:12.465917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.288 [2024-12-10 03:07:12.492110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.288 [2024-12-10 03:07:12.492167] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:18.288 [2024-12-10 03:07:12.492185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.072 ms 00:19:18.288 [2024-12-10 03:07:12.492193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.288 [2024-12-10 03:07:12.492246] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:18.288 [2024-12-10 03:07:12.492262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:18.288 [2024-12-10 03:07:12.492508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.492995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.493003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:18.288 [2024-12-10 03:07:12.493015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493178] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:18.289 [2024-12-10 03:07:12.493223] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:18.289 [2024-12-10 03:07:12.493233] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a46b198f-702e-44c1-b5e9-289446d3bc49 00:19:18.289 [2024-12-10 03:07:12.493243] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:18.289 [2024-12-10 03:07:12.493252] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:18.289 [2024-12-10 03:07:12.493261] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:18.289 [2024-12-10 03:07:12.493272] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:18.289 [2024-12-10 03:07:12.493279] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:18.289 [2024-12-10 03:07:12.493288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:18.289 [2024-12-10 03:07:12.493296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:18.289 [2024-12-10 03:07:12.493306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:18.289 [2024-12-10 03:07:12.493313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:18.289 [2024-12-10 03:07:12.493322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.289 [2024-12-10 03:07:12.493330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:18.289 [2024-12-10 03:07:12.493341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:19:18.289 [2024-12-10 03:07:12.493348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.289 [2024-12-10 03:07:12.507577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.289 [2024-12-10 03:07:12.507629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:18.289 [2024-12-10 03:07:12.507644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.142 ms 00:19:18.289 [2024-12-10 03:07:12.507654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.289 [2024-12-10 03:07:12.508070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.289 [2024-12-10 03:07:12.508097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:18.289 [2024-12-10 03:07:12.508110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:19:18.289 [2024-12-10 03:07:12.508118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.289 [2024-12-10 03:07:12.548001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.289 [2024-12-10 03:07:12.548057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:18.289 [2024-12-10 03:07:12.548074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.289 [2024-12-10 03:07:12.548082] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:18.289 [2024-12-10 03:07:12.548157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.289 [2024-12-10 03:07:12.548167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:18.289 [2024-12-10 03:07:12.548177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.289 [2024-12-10 03:07:12.548186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.289 [2024-12-10 03:07:12.548282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.289 [2024-12-10 03:07:12.548294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:18.289 [2024-12-10 03:07:12.548305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.289 [2024-12-10 03:07:12.548313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.289 [2024-12-10 03:07:12.548331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.289 [2024-12-10 03:07:12.548340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:18.289 [2024-12-10 03:07:12.548350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.289 [2024-12-10 03:07:12.548358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.289 [2024-12-10 03:07:12.634753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.289 [2024-12-10 03:07:12.634817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:18.289 [2024-12-10 03:07:12.634835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.289 [2024-12-10 03:07:12.634844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.552 [2024-12-10 03:07:12.706264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.552 [2024-12-10 03:07:12.706325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:18.552 [2024-12-10 03:07:12.706339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.552 [2024-12-10 03:07:12.706348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.552 [2024-12-10 03:07:12.706496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.552 [2024-12-10 03:07:12.706509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:18.552 [2024-12-10 03:07:12.706521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.552 [2024-12-10 03:07:12.706530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.552 [2024-12-10 03:07:12.706579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.552 [2024-12-10 03:07:12.706589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:18.552 [2024-12-10 03:07:12.706600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.552 [2024-12-10 03:07:12.706608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.552 [2024-12-10 03:07:12.706712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.552 [2024-12-10 03:07:12.706724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:18.552 [2024-12-10 03:07:12.706739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:18.552 [2024-12-10 03:07:12.706747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.552 [2024-12-10 03:07:12.706783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.552 [2024-12-10 03:07:12.706793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:18.552 [2024-12-10 03:07:12.706803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.552 [2024-12-10 03:07:12.706812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.552 [2024-12-10 03:07:12.706857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.552 [2024-12-10 03:07:12.706885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:18.552 [2024-12-10 03:07:12.706895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.552 [2024-12-10 03:07:12.706912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.552 [2024-12-10 03:07:12.706960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.552 [2024-12-10 03:07:12.706982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:18.552 [2024-12-10 03:07:12.706994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.552 [2024-12-10 03:07:12.707003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.552 [2024-12-10 03:07:12.707148] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 596.271 ms, result 0 00:19:18.552 true 00:19:18.552 03:07:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75981 00:19:18.552 03:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75981 ']' 00:19:18.552 03:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75981 00:19:18.552 03:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:18.552 03:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.552 03:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75981 00:19:18.552 killing process with pid 75981 00:19:18.552 Received shutdown signal, test time was about 4.000000 seconds 00:19:18.552 00:19:18.552 Latency(us) 00:19:18.552 [2024-12-10T03:07:12.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.552 [2024-12-10T03:07:12.940Z] =================================================================================================================== 00:19:18.552 [2024-12-10T03:07:12.940Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.552 03:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.552 03:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.552 03:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75981' 00:19:18.552 03:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75981 00:19:18.552 03:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75981 00:19:19.495 Remove shared memory files 00:19:19.495 03:07:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:19.495 03:07:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:19.495 03:07:13 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:19.495 03:07:13 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:19.495 03:07:13 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:19.495 03:07:13 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:19.495 03:07:13 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:19.495 03:07:13 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:19.495 ************************************ 00:19:19.495 END TEST ftl_bdevperf 00:19:19.495 ************************************ 00:19:19.495 00:19:19.495 real 0m23.149s 00:19:19.495 user 0m25.813s 00:19:19.495 sys 0m0.978s 00:19:19.495 03:07:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.495 03:07:13 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:19.495 03:07:13 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:19.495 03:07:13 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:19.495 03:07:13 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.495 03:07:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:19.495 ************************************ 00:19:19.495 START TEST ftl_trim 00:19:19.495 ************************************ 00:19:19.495 03:07:13 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:19.757 * Looking for test storage... 00:19:19.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:19.757 03:07:13 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:19.757 03:07:13 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:19.757 03:07:13 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:19.757 03:07:13 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:19.757 03:07:13 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:19.757 03:07:14 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:19.757 03:07:14 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:19.757 03:07:14 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:19.757 03:07:14 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:19.757 03:07:14 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:19.757 03:07:14 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:19.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.757 --rc genhtml_branch_coverage=1 00:19:19.757 --rc genhtml_function_coverage=1 00:19:19.757 --rc genhtml_legend=1 00:19:19.757 --rc geninfo_all_blocks=1 00:19:19.757 --rc geninfo_unexecuted_blocks=1 00:19:19.757 00:19:19.757 ' 00:19:19.757 03:07:14 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:19.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.757 --rc genhtml_branch_coverage=1 00:19:19.757 --rc genhtml_function_coverage=1 00:19:19.757 --rc genhtml_legend=1 00:19:19.757 --rc geninfo_all_blocks=1 00:19:19.757 --rc geninfo_unexecuted_blocks=1 00:19:19.757 00:19:19.757 ' 00:19:19.757 03:07:14 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:19.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.757 --rc genhtml_branch_coverage=1 00:19:19.757 --rc genhtml_function_coverage=1 00:19:19.757 --rc genhtml_legend=1 00:19:19.758 --rc geninfo_all_blocks=1 00:19:19.758 --rc geninfo_unexecuted_blocks=1 00:19:19.758 00:19:19.758 ' 00:19:19.758 03:07:14 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:19.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.758 --rc genhtml_branch_coverage=1 00:19:19.758 --rc genhtml_function_coverage=1 00:19:19.758 --rc genhtml_legend=1 00:19:19.758 --rc geninfo_all_blocks=1 00:19:19.758 --rc geninfo_unexecuted_blocks=1 00:19:19.758 00:19:19.758 ' 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
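The xtrace above is scripts/common.sh gating the lcov coverage options: 'lt 1.15 2' splits both version strings on '.', '-' and ':' into arrays and compares them element by element, padding the shorter one with zeros. A minimal standalone sketch of the same element-wise check follows; version_lt is a hypothetical helper name, not SPDK's actual cmp_versions, and it assumes purely numeric components.

    #!/usr/bin/env bash
    # version_lt A B: succeed when version A sorts strictly before version B,
    # splitting on '.', '-' and ':' as in the trace; missing parts default to 0.
    version_lt() {
        local IFS='.-:'
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # first lower component decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1  # first higher component decides
        done
        return 1  # equal versions are not less-than
    }
    # 'version_lt 1.15 2' succeeds, matching the 'lt 1.15 2' step above, which is
    # why the run exports the --rc lcov_branch/function_coverage options for old lcov.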
00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:19.758 03:07:14 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76335 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76335 00:19:19.758 03:07:14 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76335 ']' 00:19:19.758 03:07:14 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.758 03:07:14 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.758 03:07:14 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.758 03:07:14 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:19.758 03:07:14 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.758 03:07:14 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:19.758 [2024-12-10 03:07:14.099434] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:19.758 [2024-12-10 03:07:14.099566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76335 ] 00:19:20.019 [2024-12-10 03:07:14.259915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:20.019 [2024-12-10 03:07:14.365896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.019 [2024-12-10 03:07:14.366564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.019 [2024-12-10 03:07:14.366669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.589 03:07:14 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.589 03:07:14 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:20.589 03:07:14 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:20.589 03:07:14 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:20.589 03:07:14 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:20.589 03:07:14 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:20.589 03:07:14 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:20.589 03:07:14 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:20.850 03:07:15 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:20.850 03:07:15 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:20.850 03:07:15 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:20.850 03:07:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:20.850 03:07:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:20.850 03:07:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:20.850 03:07:15 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:20.850 03:07:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:21.112 03:07:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:21.112 { 00:19:21.112 "name": "nvme0n1", 00:19:21.112 "aliases": [ 
00:19:21.112 "31e5e667-f5b1-4d0f-85b0-c043dd20441e" 00:19:21.112 ], 00:19:21.112 "product_name": "NVMe disk", 00:19:21.112 "block_size": 4096, 00:19:21.112 "num_blocks": 1310720, 00:19:21.112 "uuid": "31e5e667-f5b1-4d0f-85b0-c043dd20441e", 00:19:21.112 "numa_id": -1, 00:19:21.112 "assigned_rate_limits": { 00:19:21.112 "rw_ios_per_sec": 0, 00:19:21.112 "rw_mbytes_per_sec": 0, 00:19:21.112 "r_mbytes_per_sec": 0, 00:19:21.112 "w_mbytes_per_sec": 0 00:19:21.112 }, 00:19:21.112 "claimed": true, 00:19:21.112 "claim_type": "read_many_write_one", 00:19:21.112 "zoned": false, 00:19:21.112 "supported_io_types": { 00:19:21.112 "read": true, 00:19:21.112 "write": true, 00:19:21.112 "unmap": true, 00:19:21.112 "flush": true, 00:19:21.112 "reset": true, 00:19:21.112 "nvme_admin": true, 00:19:21.112 "nvme_io": true, 00:19:21.112 "nvme_io_md": false, 00:19:21.112 "write_zeroes": true, 00:19:21.112 "zcopy": false, 00:19:21.112 "get_zone_info": false, 00:19:21.112 "zone_management": false, 00:19:21.112 "zone_append": false, 00:19:21.112 "compare": true, 00:19:21.112 "compare_and_write": false, 00:19:21.112 "abort": true, 00:19:21.112 "seek_hole": false, 00:19:21.112 "seek_data": false, 00:19:21.112 "copy": true, 00:19:21.112 "nvme_iov_md": false 00:19:21.112 }, 00:19:21.112 "driver_specific": { 00:19:21.112 "nvme": [ 00:19:21.112 { 00:19:21.112 "pci_address": "0000:00:11.0", 00:19:21.112 "trid": { 00:19:21.112 "trtype": "PCIe", 00:19:21.112 "traddr": "0000:00:11.0" 00:19:21.112 }, 00:19:21.112 "ctrlr_data": { 00:19:21.112 "cntlid": 0, 00:19:21.112 "vendor_id": "0x1b36", 00:19:21.112 "model_number": "QEMU NVMe Ctrl", 00:19:21.112 "serial_number": "12341", 00:19:21.112 "firmware_revision": "8.0.0", 00:19:21.112 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:21.112 "oacs": { 00:19:21.112 "security": 0, 00:19:21.112 "format": 1, 00:19:21.112 "firmware": 0, 00:19:21.112 "ns_manage": 1 00:19:21.112 }, 00:19:21.112 "multi_ctrlr": false, 00:19:21.112 "ana_reporting": false 00:19:21.112 }, 00:19:21.112 "vs": { 00:19:21.112 "nvme_version": "1.4" 00:19:21.112 }, 00:19:21.112 "ns_data": { 00:19:21.112 "id": 1, 00:19:21.112 "can_share": false 00:19:21.112 } 00:19:21.112 } 00:19:21.112 ], 00:19:21.112 "mp_policy": "active_passive" 00:19:21.112 } 00:19:21.112 } 00:19:21.112 ]' 00:19:21.112 03:07:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:21.112 03:07:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:21.112 03:07:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:21.112 03:07:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:21.112 03:07:15 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:21.112 03:07:15 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:21.112 03:07:15 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:21.112 03:07:15 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:21.112 03:07:15 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:21.112 03:07:15 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:21.112 03:07:15 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:21.373 03:07:15 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=15b6c5d8-87cd-4b3c-b9c5-01d85814f18f 00:19:21.373 03:07:15 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:21.373 03:07:15 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 15b6c5d8-87cd-4b3c-b9c5-01d85814f18f 00:19:21.703 03:07:15 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:21.968 03:07:16 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=eaefe0e7-a23c-4b59-95ad-fd424fdcb5cc 00:19:21.968 03:07:16 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u eaefe0e7-a23c-4b59-95ad-fd424fdcb5cc 00:19:21.968 03:07:16 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=c4405cf8-08ac-47bd-8f4a-166328c64538 00:19:21.968 03:07:16 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c4405cf8-08ac-47bd-8f4a-166328c64538 00:19:21.968 03:07:16 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:21.968 03:07:16 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:21.968 03:07:16 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=c4405cf8-08ac-47bd-8f4a-166328c64538 00:19:21.968 03:07:16 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:21.969 03:07:16 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size c4405cf8-08ac-47bd-8f4a-166328c64538 00:19:21.969 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c4405cf8-08ac-47bd-8f4a-166328c64538 00:19:21.969 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:21.969 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:21.969 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:21.969 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c4405cf8-08ac-47bd-8f4a-166328c64538 00:19:22.230 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:22.230 { 00:19:22.230 "name": "c4405cf8-08ac-47bd-8f4a-166328c64538", 00:19:22.230 "aliases": [ 00:19:22.230 "lvs/nvme0n1p0" 00:19:22.230 ], 00:19:22.230 "product_name": "Logical Volume", 00:19:22.230 "block_size": 4096, 00:19:22.230 "num_blocks": 26476544, 00:19:22.230 "uuid": "c4405cf8-08ac-47bd-8f4a-166328c64538", 00:19:22.230 "assigned_rate_limits": { 00:19:22.230 "rw_ios_per_sec": 0, 00:19:22.230 "rw_mbytes_per_sec": 0, 00:19:22.230 "r_mbytes_per_sec": 0, 00:19:22.230 "w_mbytes_per_sec": 0 00:19:22.230 }, 00:19:22.230 "claimed": false, 00:19:22.230 "zoned": false, 00:19:22.230 "supported_io_types": { 00:19:22.230 "read": true, 00:19:22.230 "write": true, 00:19:22.230 "unmap": true, 00:19:22.230 "flush": false, 00:19:22.230 "reset": true, 00:19:22.230 "nvme_admin": false, 00:19:22.230 "nvme_io": false, 00:19:22.230 "nvme_io_md": false, 00:19:22.230 "write_zeroes": true, 00:19:22.230 "zcopy": false, 00:19:22.230 "get_zone_info": false, 00:19:22.230 "zone_management": false, 00:19:22.230 "zone_append": false, 00:19:22.230 "compare": false, 00:19:22.230 "compare_and_write": false, 00:19:22.230 "abort": false, 00:19:22.230 "seek_hole": true, 00:19:22.230 "seek_data": true, 00:19:22.230 "copy": false, 00:19:22.230 "nvme_iov_md": false 00:19:22.230 }, 00:19:22.230 "driver_specific": { 00:19:22.230 "lvol": { 00:19:22.230 "lvol_store_uuid": "eaefe0e7-a23c-4b59-95ad-fd424fdcb5cc", 00:19:22.230 "base_bdev": "nvme0n1", 00:19:22.230 "thin_provision": true, 00:19:22.230 "num_allocated_clusters": 0, 00:19:22.230 "snapshot": false, 00:19:22.230 "clone": false, 00:19:22.230 "esnap_clone": false 00:19:22.230 } 00:19:22.230 } 00:19:22.230 } 00:19:22.230 ]' 00:19:22.230 03:07:16 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:22.230 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:22.230 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:22.230 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:22.230 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:22.230 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:22.230 03:07:16 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:22.230 03:07:16 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:22.230 03:07:16 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:22.491 03:07:16 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:22.491 03:07:16 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:22.491 03:07:16 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size c4405cf8-08ac-47bd-8f4a-166328c64538 00:19:22.491 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c4405cf8-08ac-47bd-8f4a-166328c64538 00:19:22.491 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:22.491 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:22.491 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:22.491 03:07:16 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c4405cf8-08ac-47bd-8f4a-166328c64538 00:19:22.753 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:22.753 { 00:19:22.753 "name": "c4405cf8-08ac-47bd-8f4a-166328c64538", 00:19:22.753 "aliases": [ 00:19:22.753 "lvs/nvme0n1p0" 00:19:22.753 ], 00:19:22.753 "product_name": "Logical Volume", 00:19:22.753 "block_size": 4096, 00:19:22.753 "num_blocks": 26476544, 00:19:22.753 "uuid": "c4405cf8-08ac-47bd-8f4a-166328c64538", 00:19:22.753 "assigned_rate_limits": { 00:19:22.753 "rw_ios_per_sec": 0, 00:19:22.753 "rw_mbytes_per_sec": 0, 00:19:22.753 "r_mbytes_per_sec": 0, 00:19:22.753 "w_mbytes_per_sec": 0 00:19:22.753 }, 00:19:22.753 "claimed": false, 00:19:22.753 "zoned": false, 00:19:22.753 "supported_io_types": { 00:19:22.753 "read": true, 00:19:22.753 "write": true, 00:19:22.753 "unmap": true, 00:19:22.753 "flush": false, 00:19:22.753 "reset": true, 00:19:22.753 "nvme_admin": false, 00:19:22.753 "nvme_io": false, 00:19:22.753 "nvme_io_md": false, 00:19:22.753 "write_zeroes": true, 00:19:22.753 "zcopy": false, 00:19:22.753 "get_zone_info": false, 00:19:22.753 "zone_management": false, 00:19:22.753 "zone_append": false, 00:19:22.753 "compare": false, 00:19:22.753 "compare_and_write": false, 00:19:22.753 "abort": false, 00:19:22.753 "seek_hole": true, 00:19:22.753 "seek_data": true, 00:19:22.753 "copy": false, 00:19:22.753 "nvme_iov_md": false 00:19:22.753 }, 00:19:22.753 "driver_specific": { 00:19:22.753 "lvol": { 00:19:22.753 "lvol_store_uuid": "eaefe0e7-a23c-4b59-95ad-fd424fdcb5cc", 00:19:22.753 "base_bdev": "nvme0n1", 00:19:22.753 "thin_provision": true, 00:19:22.753 "num_allocated_clusters": 0, 00:19:22.753 "snapshot": false, 00:19:22.753 "clone": false, 00:19:22.753 "esnap_clone": false 00:19:22.753 } 00:19:22.753 } 00:19:22.753 } 00:19:22.753 ]' 00:19:22.753 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:22.753 03:07:17 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:22.753 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:22.753 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:22.753 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:22.753 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:22.753 03:07:17 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:22.753 03:07:17 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:23.015 03:07:17 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:23.015 03:07:17 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:23.015 03:07:17 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size c4405cf8-08ac-47bd-8f4a-166328c64538 00:19:23.015 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c4405cf8-08ac-47bd-8f4a-166328c64538 00:19:23.015 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:23.015 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:23.015 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:23.015 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c4405cf8-08ac-47bd-8f4a-166328c64538 00:19:23.277 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:23.277 { 00:19:23.277 "name": "c4405cf8-08ac-47bd-8f4a-166328c64538", 00:19:23.277 "aliases": [ 00:19:23.277 "lvs/nvme0n1p0" 00:19:23.277 ], 00:19:23.277 "product_name": "Logical Volume", 00:19:23.277 "block_size": 4096, 00:19:23.277 "num_blocks": 26476544, 00:19:23.277 "uuid": "c4405cf8-08ac-47bd-8f4a-166328c64538", 00:19:23.277 "assigned_rate_limits": { 00:19:23.277 "rw_ios_per_sec": 0, 00:19:23.277 "rw_mbytes_per_sec": 0, 00:19:23.277 "r_mbytes_per_sec": 0, 00:19:23.277 "w_mbytes_per_sec": 0 00:19:23.277 }, 00:19:23.277 "claimed": false, 00:19:23.277 "zoned": false, 00:19:23.277 "supported_io_types": { 00:19:23.277 "read": true, 00:19:23.277 "write": true, 00:19:23.277 "unmap": true, 00:19:23.277 "flush": false, 00:19:23.277 "reset": true, 00:19:23.277 "nvme_admin": false, 00:19:23.277 "nvme_io": false, 00:19:23.277 "nvme_io_md": false, 00:19:23.277 "write_zeroes": true, 00:19:23.277 "zcopy": false, 00:19:23.277 "get_zone_info": false, 00:19:23.277 "zone_management": false, 00:19:23.277 "zone_append": false, 00:19:23.277 "compare": false, 00:19:23.277 "compare_and_write": false, 00:19:23.277 "abort": false, 00:19:23.277 "seek_hole": true, 00:19:23.277 "seek_data": true, 00:19:23.277 "copy": false, 00:19:23.277 "nvme_iov_md": false 00:19:23.277 }, 00:19:23.277 "driver_specific": { 00:19:23.277 "lvol": { 00:19:23.277 "lvol_store_uuid": "eaefe0e7-a23c-4b59-95ad-fd424fdcb5cc", 00:19:23.277 "base_bdev": "nvme0n1", 00:19:23.277 "thin_provision": true, 00:19:23.277 "num_allocated_clusters": 0, 00:19:23.277 "snapshot": false, 00:19:23.277 "clone": false, 00:19:23.277 "esnap_clone": false 00:19:23.277 } 00:19:23.277 } 00:19:23.277 } 00:19:23.277 ]' 00:19:23.277 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:23.277 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:23.277 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:23.277 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:23.277 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:23.277 03:07:17 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:23.277 03:07:17 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:23.277 03:07:17 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c4405cf8-08ac-47bd-8f4a-166328c64538 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:23.540 [2024-12-10 03:07:17.772856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.540 [2024-12-10 03:07:17.773259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:23.540 [2024-12-10 03:07:17.773286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:23.540 [2024-12-10 03:07:17.773293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.540 [2024-12-10 03:07:17.775512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.540 [2024-12-10 03:07:17.775529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:23.540 [2024-12-10 03:07:17.775538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:19:23.540 [2024-12-10 03:07:17.775544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.540 [2024-12-10 03:07:17.775617] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:23.540 [2024-12-10 03:07:17.776192] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:23.540 [2024-12-10 03:07:17.776206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.540 [2024-12-10 03:07:17.776212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:23.540 [2024-12-10 03:07:17.776220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:19:23.540 [2024-12-10 03:07:17.776226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.540 [2024-12-10 03:07:17.776316] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fa9db8c7-13a0-427a-b0c5-b6bfa89fe0fd 00:19:23.540 [2024-12-10 03:07:17.777687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.540 [2024-12-10 03:07:17.777821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:23.540 [2024-12-10 03:07:17.777927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:23.540 [2024-12-10 03:07:17.778024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.540 [2024-12-10 03:07:17.782772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.540 [2024-12-10 03:07:17.782915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:23.540 [2024-12-10 03:07:17.783019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.653 ms 00:19:23.540 [2024-12-10 03:07:17.783115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.540 [2024-12-10 03:07:17.783263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.540 [2024-12-10 03:07:17.783326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:23.540 [2024-12-10 03:07:17.783446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.066 ms 00:19:23.540 [2024-12-10 03:07:17.783548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.540 [2024-12-10 03:07:17.783625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.540 [2024-12-10 03:07:17.783687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:23.540 [2024-12-10 03:07:17.783778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:23.540 [2024-12-10 03:07:17.783833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.540 [2024-12-10 03:07:17.784041] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:23.540 [2024-12-10 03:07:17.786908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.540 [2024-12-10 03:07:17.787048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:23.540 [2024-12-10 03:07:17.787165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:19:23.540 [2024-12-10 03:07:17.787216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.540 [2024-12-10 03:07:17.787324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.540 [2024-12-10 03:07:17.787461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:23.540 [2024-12-10 03:07:17.787515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:23.540 [2024-12-10 03:07:17.787575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.540 [2024-12-10 03:07:17.787689] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:23.540 [2024-12-10 03:07:17.787890] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:23.540 [2024-12-10 03:07:17.788010] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:23.540 [2024-12-10 03:07:17.788113] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:23.540 [2024-12-10 03:07:17.788179] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:23.540 [2024-12-10 03:07:17.788291] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:23.540 [2024-12-10 03:07:17.788407] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:23.540 [2024-12-10 03:07:17.788457] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:23.540 [2024-12-10 03:07:17.788544] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:23.540 [2024-12-10 03:07:17.788634] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:23.540 [2024-12-10 03:07:17.788742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.540 [2024-12-10 03:07:17.788787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:23.540 [2024-12-10 03:07:17.788879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:19:23.540 [2024-12-10 03:07:17.788930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.540 [2024-12-10 03:07:17.789114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.540 
[2024-12-10 03:07:17.789205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:23.541 [2024-12-10 03:07:17.789298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:23.541 [2024-12-10 03:07:17.789347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.541 [2024-12-10 03:07:17.789512] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:23.541 [2024-12-10 03:07:17.789638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:23.541 [2024-12-10 03:07:17.789749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.541 [2024-12-10 03:07:17.789837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.541 [2024-12-10 03:07:17.789888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:23.541 [2024-12-10 03:07:17.789969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:23.541 [2024-12-10 03:07:17.790057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:23.541 [2024-12-10 03:07:17.790156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:23.541 [2024-12-10 03:07:17.790205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:23.541 [2024-12-10 03:07:17.790328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.541 [2024-12-10 03:07:17.790390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:23.541 [2024-12-10 03:07:17.790448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:23.541 [2024-12-10 03:07:17.790539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.541 [2024-12-10 03:07:17.790636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:23.541 [2024-12-10 03:07:17.790685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:23.541 [2024-12-10 03:07:17.790788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.541 [2024-12-10 03:07:17.790835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:23.541 [2024-12-10 03:07:17.790934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:23.541 [2024-12-10 03:07:17.790987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.541 [2024-12-10 03:07:17.791085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:23.541 [2024-12-10 03:07:17.791133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:23.541 [2024-12-10 03:07:17.791269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.541 [2024-12-10 03:07:17.791322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:23.541 [2024-12-10 03:07:17.791429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:23.541 [2024-12-10 03:07:17.791479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.541 [2024-12-10 03:07:17.791602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:23.541 [2024-12-10 03:07:17.791654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:23.541 [2024-12-10 03:07:17.791767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.541 [2024-12-10 03:07:17.791811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:23.541 [2024-12-10 03:07:17.791920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:23.541 [2024-12-10 03:07:17.791968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.541 [2024-12-10 03:07:17.792077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:23.541 [2024-12-10 03:07:17.792130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:23.541 [2024-12-10 03:07:17.792219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.541 [2024-12-10 03:07:17.792276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:23.541 [2024-12-10 03:07:17.792345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:23.541 [2024-12-10 03:07:17.792448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.541 [2024-12-10 03:07:17.792493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:23.541 [2024-12-10 03:07:17.792605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:23.541 [2024-12-10 03:07:17.792641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.541 [2024-12-10 03:07:17.792671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:23.541 [2024-12-10 03:07:17.792746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:23.541 [2024-12-10 03:07:17.792799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.541 [2024-12-10 03:07:17.792826] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:23.541 [2024-12-10 03:07:17.792858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:23.541 [2024-12-10 03:07:17.792924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.541 [2024-12-10 03:07:17.792972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.541 [2024-12-10 03:07:17.793001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:23.541 [2024-12-10 03:07:17.793031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:23.541 [2024-12-10 03:07:17.793108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:23.541 [2024-12-10 03:07:17.793154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:23.541 [2024-12-10 03:07:17.793185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:23.541 [2024-12-10 03:07:17.793216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:23.541 [2024-12-10 03:07:17.793246] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:23.541 [2024-12-10 03:07:17.793332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:23.541 [2024-12-10 03:07:17.793397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:23.541 [2024-12-10 03:07:17.793433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:23.541 [2024-12-10 03:07:17.793460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:23.541 [2024-12-10 03:07:17.793490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:23.541 [2024-12-10 03:07:17.793524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:23.541 [2024-12-10 03:07:17.793558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:23.541 [2024-12-10 03:07:17.793599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:23.541 [2024-12-10 03:07:17.793700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:23.541 [2024-12-10 03:07:17.793746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:23.541 [2024-12-10 03:07:17.793780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:23.541 [2024-12-10 03:07:17.793809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:23.541 [2024-12-10 03:07:17.793841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:23.541 [2024-12-10 03:07:17.793870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:23.541 [2024-12-10 03:07:17.793962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:23.541 [2024-12-10 03:07:17.794009] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:23.541 [2024-12-10 03:07:17.794047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:23.541 [2024-12-10 03:07:17.794080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:23.541 [2024-12-10 03:07:17.794111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:23.541 [2024-12-10 03:07:17.794140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:23.541 [2024-12-10 03:07:17.794222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:23.541 [2024-12-10 03:07:17.794271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.541 [2024-12-10 03:07:17.794301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:23.541 [2024-12-10 03:07:17.794327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.814 ms 00:19:23.541 [2024-12-10 03:07:17.794358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.541 [2024-12-10 03:07:17.794465] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:23.541 [2024-12-10 03:07:17.794577] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:26.086 [2024-12-10 03:07:20.258630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.086 [2024-12-10 03:07:20.259123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:26.086 [2024-12-10 03:07:20.259188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2464.153 ms 00:19:26.086 [2024-12-10 03:07:20.259237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.086 [2024-12-10 03:07:20.284462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.086 [2024-12-10 03:07:20.284708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:26.086 [2024-12-10 03:07:20.284780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.929 ms 00:19:26.086 [2024-12-10 03:07:20.284826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.086 [2024-12-10 03:07:20.284996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.086 [2024-12-10 03:07:20.285127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:26.086 [2024-12-10 03:07:20.285222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:26.086 [2024-12-10 03:07:20.285277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.086 [2024-12-10 03:07:20.340409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.086 [2024-12-10 03:07:20.340530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:26.086 [2024-12-10 03:07:20.340586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.922 ms 00:19:26.086 [2024-12-10 03:07:20.340630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.086 [2024-12-10 03:07:20.340729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.086 [2024-12-10 03:07:20.340884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:26.086 [2024-12-10 03:07:20.340947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:26.086 [2024-12-10 03:07:20.340990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.086 [2024-12-10 03:07:20.341334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.086 [2024-12-10 03:07:20.341508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:26.086 [2024-12-10 03:07:20.341584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:19:26.086 [2024-12-10 03:07:20.341642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.086 [2024-12-10 03:07:20.341872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.086 [2024-12-10 03:07:20.341929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:26.086 [2024-12-10 03:07:20.342016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:26.086 [2024-12-10 03:07:20.342126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.086 [2024-12-10 03:07:20.356166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.086 [2024-12-10 03:07:20.356328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:26.086 [2024-12-10 03:07:20.356387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.959 ms 00:19:26.086 [2024-12-10 03:07:20.356434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.086 [2024-12-10 03:07:20.367714] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:26.086 [2024-12-10 03:07:20.381474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.086 [2024-12-10 03:07:20.381641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:26.086 [2024-12-10 03:07:20.381812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.920 ms 00:19:26.086 [2024-12-10 03:07:20.381874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.086 [2024-12-10 03:07:20.443826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.086 [2024-12-10 03:07:20.444053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:26.086 [2024-12-10 03:07:20.444175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.787 ms 00:19:26.086 [2024-12-10 03:07:20.444231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.086 [2024-12-10 03:07:20.444491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.086 [2024-12-10 03:07:20.444620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:26.086 [2024-12-10 03:07:20.444719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:19:26.086 [2024-12-10 03:07:20.444775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.086 [2024-12-10 03:07:20.467662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.086 [2024-12-10 03:07:20.467826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:26.086 [2024-12-10 03:07:20.467948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.776 ms 00:19:26.346 [2024-12-10 03:07:20.468029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.346 [2024-12-10 03:07:20.490446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.346 [2024-12-10 03:07:20.490586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:26.346 [2024-12-10 03:07:20.490629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.256 ms 00:19:26.346 [2024-12-10 03:07:20.490676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.346 [2024-12-10 03:07:20.491279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.346 [2024-12-10 03:07:20.491421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:26.346 [2024-12-10 03:07:20.491498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:19:26.346 [2024-12-10 03:07:20.491605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.346 [2024-12-10 03:07:20.565989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.346 [2024-12-10 03:07:20.566190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:26.346 [2024-12-10 03:07:20.566311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.251 ms 00:19:26.346 [2024-12-10 03:07:20.566387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
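The FTL startup being traced here runs on the bdev stack that the earlier RPC calls assembled: a thin-provisioned lvol on the base NVMe as the data device and a 5171 MiB split of the second NVMe as the NV cache. Condensed into one sequence for reference, using the commands exactly as they appear in the xtrace; the $rpc, $lvs and $lvol variables are illustrative sketches of the UUID captures, since the log only shows the values they resolved to.

    #!/usr/bin/env bash
    # Replay of the stack under test (addresses and sizes as used in this run).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device -> nvme0n1
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                    # lvstore UUID
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # thin 103424 MiB lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB split -> nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10        # ftl0 over lvol + cache

The 60 MiB L2P budget passed to bdev_ftl_create is what the startup reports as 'l2p maximum resident size is: 59 (of 60) MiB', and the 5171 MiB cache split is the five NV cache chunks being scrubbed in the trace above.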
00:19:26.346 [2024-12-10 03:07:20.589917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.346 [2024-12-10 03:07:20.590087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:26.346 [2024-12-10 03:07:20.590198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.378 ms 00:19:26.346 [2024-12-10 03:07:20.590258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.346 [2024-12-10 03:07:20.612646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.346 [2024-12-10 03:07:20.612804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:26.346 [2024-12-10 03:07:20.612854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.292 ms 00:19:26.346 [2024-12-10 03:07:20.612895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.346 [2024-12-10 03:07:20.635619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.346 [2024-12-10 03:07:20.635711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:26.346 [2024-12-10 03:07:20.635766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.581 ms 00:19:26.346 [2024-12-10 03:07:20.635808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.346 [2024-12-10 03:07:20.635974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.346 [2024-12-10 03:07:20.636026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:26.346 [2024-12-10 03:07:20.636073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:26.346 [2024-12-10 03:07:20.636116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.346 [2024-12-10 03:07:20.636217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.346 [2024-12-10 03:07:20.636320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:26.346 [2024-12-10 03:07:20.636391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:26.346 [2024-12-10 03:07:20.636494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.346 [2024-12-10 03:07:20.637315] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:26.347 [2024-12-10 03:07:20.640563] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2864.193 ms, result 0 00:19:26.347 [2024-12-10 03:07:20.641817] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:26.347 { 00:19:26.347 "name": "ftl0", 00:19:26.347 "uuid": "fa9db8c7-13a0-427a-b0c5-b6bfa89fe0fd" 00:19:26.347 } 00:19:26.347 03:07:20 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:26.347 03:07:20 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:26.347 03:07:20 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:26.347 03:07:20 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:26.347 03:07:20 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:26.347 03:07:20 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:26.347 03:07:20 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:26.604 03:07:20 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:26.863 [ 00:19:26.863 { 00:19:26.863 "name": "ftl0", 00:19:26.863 "aliases": [ 00:19:26.863 "fa9db8c7-13a0-427a-b0c5-b6bfa89fe0fd" 00:19:26.863 ], 00:19:26.863 "product_name": "FTL disk", 00:19:26.863 "block_size": 4096, 00:19:26.863 "num_blocks": 23592960, 00:19:26.863 "uuid": "fa9db8c7-13a0-427a-b0c5-b6bfa89fe0fd", 00:19:26.863 "assigned_rate_limits": { 00:19:26.863 "rw_ios_per_sec": 0, 00:19:26.863 "rw_mbytes_per_sec": 0, 00:19:26.863 "r_mbytes_per_sec": 0, 00:19:26.863 "w_mbytes_per_sec": 0 00:19:26.863 }, 00:19:26.863 "claimed": false, 00:19:26.863 "zoned": false, 00:19:26.863 "supported_io_types": { 00:19:26.863 "read": true, 00:19:26.863 "write": true, 00:19:26.863 "unmap": true, 00:19:26.863 "flush": true, 00:19:26.863 "reset": false, 00:19:26.863 "nvme_admin": false, 00:19:26.863 "nvme_io": false, 00:19:26.863 "nvme_io_md": false, 00:19:26.863 "write_zeroes": true, 00:19:26.863 "zcopy": false, 00:19:26.863 "get_zone_info": false, 00:19:26.863 "zone_management": false, 00:19:26.863 "zone_append": false, 00:19:26.863 "compare": false, 00:19:26.863 "compare_and_write": false, 00:19:26.863 "abort": false, 00:19:26.863 "seek_hole": false, 00:19:26.863 "seek_data": false, 00:19:26.863 "copy": false, 00:19:26.863 "nvme_iov_md": false 00:19:26.863 }, 00:19:26.863 "driver_specific": { 00:19:26.863 "ftl": { 00:19:26.863 "base_bdev": "c4405cf8-08ac-47bd-8f4a-166328c64538", 00:19:26.863 "cache": "nvc0n1p0" 00:19:26.863 } 00:19:26.863 } 00:19:26.863 } 00:19:26.863 ] 00:19:26.863 03:07:21 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:26.863 03:07:21 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:26.863 03:07:21 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:27.121 03:07:21 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:27.121 03:07:21 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:27.121 03:07:21 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:27.121 { 00:19:27.121 "name": "ftl0", 00:19:27.121 "aliases": [ 00:19:27.121 "fa9db8c7-13a0-427a-b0c5-b6bfa89fe0fd" 00:19:27.121 ], 00:19:27.121 "product_name": "FTL disk", 00:19:27.121 "block_size": 4096, 00:19:27.121 "num_blocks": 23592960, 00:19:27.121 "uuid": "fa9db8c7-13a0-427a-b0c5-b6bfa89fe0fd", 00:19:27.121 "assigned_rate_limits": { 00:19:27.121 "rw_ios_per_sec": 0, 00:19:27.121 "rw_mbytes_per_sec": 0, 00:19:27.121 "r_mbytes_per_sec": 0, 00:19:27.121 "w_mbytes_per_sec": 0 00:19:27.121 }, 00:19:27.121 "claimed": false, 00:19:27.121 "zoned": false, 00:19:27.121 "supported_io_types": { 00:19:27.121 "read": true, 00:19:27.121 "write": true, 00:19:27.121 "unmap": true, 00:19:27.121 "flush": true, 00:19:27.121 "reset": false, 00:19:27.121 "nvme_admin": false, 00:19:27.121 "nvme_io": false, 00:19:27.121 "nvme_io_md": false, 00:19:27.121 "write_zeroes": true, 00:19:27.121 "zcopy": false, 00:19:27.121 "get_zone_info": false, 00:19:27.121 "zone_management": false, 00:19:27.121 "zone_append": false, 00:19:27.121 "compare": false, 00:19:27.121 "compare_and_write": false, 00:19:27.121 "abort": false, 00:19:27.121 "seek_hole": false, 00:19:27.121 "seek_data": false, 00:19:27.121 "copy": false, 00:19:27.121 "nvme_iov_md": false 00:19:27.121 }, 00:19:27.121 "driver_specific": { 00:19:27.121 "ftl": { 00:19:27.121 "base_bdev": "c4405cf8-08ac-47bd-8f4a-166328c64538", 
00:19:27.121 "cache": "nvc0n1p0" 00:19:27.121 } 00:19:27.121 } 00:19:27.121 } 00:19:27.121 ]' 00:19:27.121 03:07:21 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:27.121 03:07:21 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:27.121 03:07:21 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:27.381 [2024-12-10 03:07:21.661044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.382 [2024-12-10 03:07:21.661471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:27.382 [2024-12-10 03:07:21.661556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:27.382 [2024-12-10 03:07:21.661602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.382 [2024-12-10 03:07:21.661696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:27.382 [2024-12-10 03:07:21.664336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.382 [2024-12-10 03:07:21.664438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:27.382 [2024-12-10 03:07:21.664482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.528 ms 00:19:27.382 [2024-12-10 03:07:21.664587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.382 [2024-12-10 03:07:21.665105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.382 [2024-12-10 03:07:21.665212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:27.382 [2024-12-10 03:07:21.665288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:19:27.382 [2024-12-10 03:07:21.665405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.382 [2024-12-10 03:07:21.669102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.382 [2024-12-10 03:07:21.669215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:27.382 [2024-12-10 03:07:21.669312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:19:27.382 [2024-12-10 03:07:21.669422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.382 [2024-12-10 03:07:21.676646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.382 [2024-12-10 03:07:21.676787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:27.382 [2024-12-10 03:07:21.676891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.135 ms 00:19:27.382 [2024-12-10 03:07:21.676971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.382 [2024-12-10 03:07:21.700561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.382 [2024-12-10 03:07:21.700704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:27.382 [2024-12-10 03:07:21.700776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.472 ms 00:19:27.382 [2024-12-10 03:07:21.700817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.382 [2024-12-10 03:07:21.715979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.382 [2024-12-10 03:07:21.716136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:27.382 [2024-12-10 03:07:21.716193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.043 ms 00:19:27.382 [2024-12-10 03:07:21.716238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.382 [2024-12-10 03:07:21.716473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.382 [2024-12-10 03:07:21.716591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:27.382 [2024-12-10 03:07:21.716654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:19:27.382 [2024-12-10 03:07:21.716708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.382 [2024-12-10 03:07:21.739441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.382 [2024-12-10 03:07:21.739583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:27.382 [2024-12-10 03:07:21.739634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.603 ms 00:19:27.382 [2024-12-10 03:07:21.739676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.382 [2024-12-10 03:07:21.762059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.382 [2024-12-10 03:07:21.762216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:27.382 [2024-12-10 03:07:21.762239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.301 ms 00:19:27.382 [2024-12-10 03:07:21.762246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.643 [2024-12-10 03:07:21.784577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.643 [2024-12-10 03:07:21.784723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:27.643 [2024-12-10 03:07:21.784785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.238 ms 00:19:27.643 [2024-12-10 03:07:21.784822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.643 [2024-12-10 03:07:21.806814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.643 [2024-12-10 03:07:21.806961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:27.643 [2024-12-10 03:07:21.807007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.866 ms 00:19:27.643 [2024-12-10 03:07:21.807046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.643 [2024-12-10 03:07:21.807401] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:27.644 [2024-12-10 03:07:21.807576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.807670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.807772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.807880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.807962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808165] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.808913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 
[2024-12-10 03:07:21.809598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.809937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:27.644 [2024-12-10 03:07:21.810895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.810975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.811992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.812032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.812124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.812167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.812207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.812244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.812285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.812371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.812438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:27.644 [2024-12-10 03:07:21.812448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:27.645 [2024-12-10 03:07:21.812572] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:27.645 [2024-12-10 03:07:21.812583] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9db8c7-13a0-427a-b0c5-b6bfa89fe0fd 00:19:27.645 [2024-12-10 03:07:21.812592] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:27.645 [2024-12-10 03:07:21.812600] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:27.645 [2024-12-10 03:07:21.812607] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:27.645 [2024-12-10 03:07:21.812623] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:27.645 [2024-12-10 03:07:21.812630] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:27.645 [2024-12-10 03:07:21.812639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:27.645 [2024-12-10 03:07:21.812646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:27.645 [2024-12-10 03:07:21.812654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:27.645 [2024-12-10 03:07:21.812660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:27.645 [2024-12-10 03:07:21.812670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.645 [2024-12-10 03:07:21.812678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:27.645 [2024-12-10 03:07:21.812688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.295 ms 00:19:27.645 [2024-12-10 03:07:21.812695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:21.825745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.645 [2024-12-10 03:07:21.825884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:27.645 [2024-12-10 03:07:21.825951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.989 ms 00:19:27.645 [2024-12-10 03:07:21.825993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:21.826425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.645 [2024-12-10 03:07:21.826478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:27.645 [2024-12-10 03:07:21.826525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:19:27.645 [2024-12-10 03:07:21.826564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:21.870048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.645 [2024-12-10 03:07:21.870207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:27.645 [2024-12-10 03:07:21.870269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.645 [2024-12-10 03:07:21.870310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:21.870443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.645 [2024-12-10 03:07:21.870543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.645 [2024-12-10 03:07:21.870611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.645 [2024-12-10 03:07:21.870658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:21.870743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.645 [2024-12-10 03:07:21.870811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.645 [2024-12-10 03:07:21.870923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.645 [2024-12-10 03:07:21.870963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:21.871014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.645 [2024-12-10 03:07:21.871094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.645 [2024-12-10 03:07:21.871148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.645 [2024-12-10 03:07:21.871180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:21.950152] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.645 [2024-12-10 03:07:21.950300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.645 [2024-12-10 03:07:21.950347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.645 [2024-12-10 03:07:21.950506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:22.011730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.645 [2024-12-10 03:07:22.011949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.645 [2024-12-10 03:07:22.012018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.645 [2024-12-10 03:07:22.012058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:22.012171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.645 [2024-12-10 03:07:22.012286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.645 [2024-12-10 03:07:22.012351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.645 [2024-12-10 03:07:22.012417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:22.012547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.645 [2024-12-10 03:07:22.012601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.645 [2024-12-10 03:07:22.012643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.645 [2024-12-10 03:07:22.012679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:22.012815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.645 [2024-12-10 03:07:22.012921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.645 [2024-12-10 03:07:22.012991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.645 [2024-12-10 03:07:22.013044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:22.013216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.645 [2024-12-10 03:07:22.013254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:27.645 [2024-12-10 03:07:22.013298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.645 [2024-12-10 03:07:22.013329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:22.013417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.645 [2024-12-10 03:07:22.013513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.645 [2024-12-10 03:07:22.013620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.645 [2024-12-10 03:07:22.013661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.645 [2024-12-10 03:07:22.013755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.645 [2024-12-10 03:07:22.013768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.645 [2024-12-10 03:07:22.013778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.645 [2024-12-10 03:07:22.013785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:27.645 [2024-12-10 03:07:22.013958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 352.894 ms, result 0 00:19:27.645 true 00:19:27.907 03:07:22 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76335 00:19:27.907 03:07:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76335 ']' 00:19:27.907 03:07:22 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76335 00:19:27.907 03:07:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:27.907 03:07:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.907 03:07:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76335 00:19:27.907 03:07:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.907 03:07:22 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.907 killing process with pid 76335 00:19:27.907 03:07:22 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76335' 00:19:27.907 03:07:22 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76335 00:19:27.907 03:07:22 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76335 00:19:34.517 03:07:27 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:34.517 65536+0 records in 00:19:34.517 65536+0 records out 00:19:34.517 268435456 bytes (268 MB, 256 MiB) copied, 1.06788 s, 251 MB/s 00:19:34.517 03:07:28 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:34.517 [2024-12-10 03:07:28.788517] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
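For reference, the write phase just traced reduces to two commands that appear verbatim in the xtrace: dd fills a 256 MiB random-pattern file (65536 blocks x 4 KiB = 268435456 bytes, matching the "268435456 bytes (268 MB, 256 MiB) ... 251 MB/s" summary above), and spdk_dd streams that file into the ftl0 bdev using the subsystem config captured earlier with save_subsystem_config -n bdev. A minimal sketch, assuming the autotest VM's paths; the of= target for dd is inferred from the --if= argument later passed to spdk_dd and is not shown in the trace itself:

    # Sketch of ftl/trim.sh's write step (paths assume the autotest VM layout).
    # 65536 * 4096 bytes = 268435456 bytes = 256 MiB of random data.
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
       bs=4K count=65536

    # Replay the pattern into the FTL bdev described by ftl.json (the
    # '{"subsystems": [...]}' blob saved via save_subsystem_config -n bdev).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The size check traced earlier is the same idea in one line: rpc.py bdev_get_bdevs -b ftl0 piped to jq '.[] .num_blocks' returns 23592960, i.e. 23592960 x 4096-byte blocks = 90 GiB of logical space (and, at the 4 bytes per L2P entry shown in the layout dump, the 90.00 MiB l2p region).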
00:19:34.517 [2024-12-10 03:07:28.788745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76523 ] 00:19:34.776 [2024-12-10 03:07:28.942589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.776 [2024-12-10 03:07:29.035128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.036 [2024-12-10 03:07:29.289591] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.036 [2024-12-10 03:07:29.289776] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.295 [2024-12-10 03:07:29.443827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.295 [2024-12-10 03:07:29.443985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:35.295 [2024-12-10 03:07:29.444086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.295 [2024-12-10 03:07:29.444111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.295 [2024-12-10 03:07:29.446740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.295 [2024-12-10 03:07:29.446857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.295 [2024-12-10 03:07:29.446918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:19:35.295 [2024-12-10 03:07:29.446941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.295 [2024-12-10 03:07:29.447027] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:35.295 [2024-12-10 03:07:29.447910] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:35.295 [2024-12-10 03:07:29.448018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.295 [2024-12-10 03:07:29.448074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.295 [2024-12-10 03:07:29.448097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:19:35.295 [2024-12-10 03:07:29.448115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.295 [2024-12-10 03:07:29.449318] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:35.295 [2024-12-10 03:07:29.461358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.295 [2024-12-10 03:07:29.461478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:35.295 [2024-12-10 03:07:29.461535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.042 ms 00:19:35.295 [2024-12-10 03:07:29.461558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.295 [2024-12-10 03:07:29.461659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.295 [2024-12-10 03:07:29.461690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:35.295 [2024-12-10 03:07:29.461711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:35.295 [2024-12-10 03:07:29.461790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.295 [2024-12-10 03:07:29.466513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:35.295 [2024-12-10 03:07:29.466610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.295 [2024-12-10 03:07:29.466658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.664 ms 00:19:35.295 [2024-12-10 03:07:29.466679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.295 [2024-12-10 03:07:29.466772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.295 [2024-12-10 03:07:29.466857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.295 [2024-12-10 03:07:29.466881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:35.295 [2024-12-10 03:07:29.466899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.295 [2024-12-10 03:07:29.466939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.295 [2024-12-10 03:07:29.466960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:35.295 [2024-12-10 03:07:29.467016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:35.296 [2024-12-10 03:07:29.467037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.296 [2024-12-10 03:07:29.467072] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:35.296 [2024-12-10 03:07:29.470368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.296 [2024-12-10 03:07:29.470465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.296 [2024-12-10 03:07:29.470518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.301 ms 00:19:35.296 [2024-12-10 03:07:29.470539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.296 [2024-12-10 03:07:29.470588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.296 [2024-12-10 03:07:29.470709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:35.296 [2024-12-10 03:07:29.470732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:35.296 [2024-12-10 03:07:29.470751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.296 [2024-12-10 03:07:29.470798] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:35.296 [2024-12-10 03:07:29.470841] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:35.296 [2024-12-10 03:07:29.471007] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:35.296 [2024-12-10 03:07:29.471044] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:35.296 [2024-12-10 03:07:29.471199] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:35.296 [2024-12-10 03:07:29.471267] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:35.296 [2024-12-10 03:07:29.471323] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:35.296 [2024-12-10 03:07:29.471359] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:35.296 [2024-12-10 03:07:29.471430] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:35.296 [2024-12-10 03:07:29.471463] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:35.296 [2024-12-10 03:07:29.471481] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:35.296 [2024-12-10 03:07:29.471522] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:35.296 [2024-12-10 03:07:29.471543] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:35.296 [2024-12-10 03:07:29.471591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.296 [2024-12-10 03:07:29.471613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:35.296 [2024-12-10 03:07:29.471633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:19:35.296 [2024-12-10 03:07:29.471671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.296 [2024-12-10 03:07:29.471788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.296 [2024-12-10 03:07:29.471820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:35.296 [2024-12-10 03:07:29.471867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:35.296 [2024-12-10 03:07:29.471888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.296 [2024-12-10 03:07:29.472036] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:35.296 [2024-12-10 03:07:29.472062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:35.296 [2024-12-10 03:07:29.472082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.296 [2024-12-10 03:07:29.472132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.296 [2024-12-10 03:07:29.472154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:35.296 [2024-12-10 03:07:29.472171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:35.296 [2024-12-10 03:07:29.472212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:35.296 [2024-12-10 03:07:29.472233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:35.296 [2024-12-10 03:07:29.472251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:35.296 [2024-12-10 03:07:29.472269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.296 [2024-12-10 03:07:29.472288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:35.296 [2024-12-10 03:07:29.472315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:35.296 [2024-12-10 03:07:29.472369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.296 [2024-12-10 03:07:29.472400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:35.296 [2024-12-10 03:07:29.472419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:35.296 [2024-12-10 03:07:29.472437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.296 [2024-12-10 03:07:29.472454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:35.296 [2024-12-10 03:07:29.472472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:35.296 [2024-12-10 03:07:29.472489] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.296 [2024-12-10 03:07:29.472534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:35.296 [2024-12-10 03:07:29.472556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:35.296 [2024-12-10 03:07:29.472574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.296 [2024-12-10 03:07:29.472591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:35.296 [2024-12-10 03:07:29.472609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:35.296 [2024-12-10 03:07:29.472627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.296 [2024-12-10 03:07:29.472645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:35.296 [2024-12-10 03:07:29.472662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:35.296 [2024-12-10 03:07:29.472680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.296 [2024-12-10 03:07:29.472697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:35.296 [2024-12-10 03:07:29.472715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:35.296 [2024-12-10 03:07:29.472732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.296 [2024-12-10 03:07:29.472749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:35.296 [2024-12-10 03:07:29.472767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:35.296 [2024-12-10 03:07:29.472784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.296 [2024-12-10 03:07:29.472802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:35.296 [2024-12-10 03:07:29.472820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:35.296 [2024-12-10 03:07:29.472837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.296 [2024-12-10 03:07:29.472855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:35.296 [2024-12-10 03:07:29.472872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:35.296 [2024-12-10 03:07:29.472930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.296 [2024-12-10 03:07:29.472952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:35.296 [2024-12-10 03:07:29.473022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:35.296 [2024-12-10 03:07:29.473043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.296 [2024-12-10 03:07:29.473061] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:35.296 [2024-12-10 03:07:29.473081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:35.296 [2024-12-10 03:07:29.473118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.296 [2024-12-10 03:07:29.473238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.296 [2024-12-10 03:07:29.473308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:35.296 [2024-12-10 03:07:29.473330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:35.296 [2024-12-10 03:07:29.473348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:35.296 
[2024-12-10 03:07:29.473367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:35.296 [2024-12-10 03:07:29.473437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:35.296 [2024-12-10 03:07:29.473459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:35.296 [2024-12-10 03:07:29.473468] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:35.296 [2024-12-10 03:07:29.473478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.296 [2024-12-10 03:07:29.473486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:35.296 [2024-12-10 03:07:29.473494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:35.296 [2024-12-10 03:07:29.473501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:35.296 [2024-12-10 03:07:29.473508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:35.296 [2024-12-10 03:07:29.473515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:35.296 [2024-12-10 03:07:29.473522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:35.296 [2024-12-10 03:07:29.473529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:35.296 [2024-12-10 03:07:29.473536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:35.296 [2024-12-10 03:07:29.473543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:35.296 [2024-12-10 03:07:29.473550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:35.296 [2024-12-10 03:07:29.473558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:35.296 [2024-12-10 03:07:29.473564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:35.296 [2024-12-10 03:07:29.473571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:35.296 [2024-12-10 03:07:29.473579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:35.296 [2024-12-10 03:07:29.473586] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:35.296 [2024-12-10 03:07:29.473594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.297 [2024-12-10 03:07:29.473602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:35.297 [2024-12-10 03:07:29.473609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:35.297 [2024-12-10 03:07:29.473616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:35.297 [2024-12-10 03:07:29.473623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:35.297 [2024-12-10 03:07:29.473630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.473641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:35.297 [2024-12-10 03:07:29.473648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms 00:19:35.297 [2024-12-10 03:07:29.473655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.297 [2024-12-10 03:07:29.500039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.500078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.297 [2024-12-10 03:07:29.500091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.324 ms 00:19:35.297 [2024-12-10 03:07:29.500099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.297 [2024-12-10 03:07:29.500226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.500237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:35.297 [2024-12-10 03:07:29.500245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:35.297 [2024-12-10 03:07:29.500252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.297 [2024-12-10 03:07:29.545536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.545576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.297 [2024-12-10 03:07:29.545591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.263 ms 00:19:35.297 [2024-12-10 03:07:29.545599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.297 [2024-12-10 03:07:29.545695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.545707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.297 [2024-12-10 03:07:29.545716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:35.297 [2024-12-10 03:07:29.545723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.297 [2024-12-10 03:07:29.546027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.546041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.297 [2024-12-10 03:07:29.546055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:19:35.297 [2024-12-10 03:07:29.546062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.297 [2024-12-10 03:07:29.546187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.546196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.297 [2024-12-10 03:07:29.546204] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:35.297 [2024-12-10 03:07:29.546211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.297 [2024-12-10 03:07:29.559253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.559284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.297 [2024-12-10 03:07:29.559295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.022 ms 00:19:35.297 [2024-12-10 03:07:29.559303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.297 [2024-12-10 03:07:29.571480] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:35.297 [2024-12-10 03:07:29.571514] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:35.297 [2024-12-10 03:07:29.571526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.571534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:35.297 [2024-12-10 03:07:29.571543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.108 ms 00:19:35.297 [2024-12-10 03:07:29.571550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.297 [2024-12-10 03:07:29.595709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.595745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:35.297 [2024-12-10 03:07:29.595755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.030 ms 00:19:35.297 [2024-12-10 03:07:29.595763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.297 [2024-12-10 03:07:29.607100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.607132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:35.297 [2024-12-10 03:07:29.607141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.270 ms 00:19:35.297 [2024-12-10 03:07:29.607148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.297 [2024-12-10 03:07:29.618184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.618216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:35.297 [2024-12-10 03:07:29.618225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.977 ms 00:19:35.297 [2024-12-10 03:07:29.618232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.297 [2024-12-10 03:07:29.618831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.618850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:35.297 [2024-12-10 03:07:29.618859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:19:35.297 [2024-12-10 03:07:29.618866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.297 [2024-12-10 03:07:29.673148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.297 [2024-12-10 03:07:29.673192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:35.297 [2024-12-10 03:07:29.673205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.259 ms 00:19:35.297 [2024-12-10 03:07:29.673213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.558 [2024-12-10 03:07:29.683468] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:35.558 [2024-12-10 03:07:29.696668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.558 [2024-12-10 03:07:29.696702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:35.558 [2024-12-10 03:07:29.696713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.369 ms 00:19:35.558 [2024-12-10 03:07:29.696721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.558 [2024-12-10 03:07:29.696797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.558 [2024-12-10 03:07:29.696808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:35.558 [2024-12-10 03:07:29.696817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:35.558 [2024-12-10 03:07:29.696824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.558 [2024-12-10 03:07:29.696869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.558 [2024-12-10 03:07:29.696877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:35.558 [2024-12-10 03:07:29.696885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:35.558 [2024-12-10 03:07:29.696892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.558 [2024-12-10 03:07:29.696923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.558 [2024-12-10 03:07:29.696934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.558 [2024-12-10 03:07:29.696941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:35.558 [2024-12-10 03:07:29.696948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.558 [2024-12-10 03:07:29.696977] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:35.558 [2024-12-10 03:07:29.696986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.558 [2024-12-10 03:07:29.696994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:35.558 [2024-12-10 03:07:29.697001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:35.558 [2024-12-10 03:07:29.697008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.558 [2024-12-10 03:07:29.719668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.558 [2024-12-10 03:07:29.719703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.558 [2024-12-10 03:07:29.719715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.639 ms 00:19:35.558 [2024-12-10 03:07:29.719723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.558 [2024-12-10 03:07:29.719804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.558 [2024-12-10 03:07:29.719814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.558 [2024-12-10 03:07:29.719822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:35.558 [2024-12-10 03:07:29.719830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
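Also for reference: the waitforbdev helper whose xtrace appeared after the first FTL startup (common/autotest_common.sh@903-@911) amounts to roughly the following. This is a reconstruction from the visible trace lines, not the helper's exact source; the rpc.py path is assumed from the other invocations in this log:

    # Approximate reconstruction of waitforbdev from the xtrace above.
    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=$2
        local i   # declared in the trace; the retry loop using it is not visible here
        [[ -z $bdev_timeout ]] && bdev_timeout=2000   # default of 2000 ms seen in the trace
        # Flush pending examine callbacks before querying.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
        # -t lets the RPC wait up to bdev_timeout ms for the bdev to appear.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
            -b "$bdev_name" -t "$bdev_timeout" && return 0
        return 1
    }

The second startup shown here comes from spdk_dd loading ftl.json directly, so no waitforbdev trace follows it; the RPC-based readiness check above belongs to the rpc.py-driven setup earlier in the test.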
00:19:35.558 [2024-12-10 03:07:29.720630] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:35.558 [2024-12-10 03:07:29.723631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 276.531 ms, result 0 00:19:35.558 [2024-12-10 03:07:29.724338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:35.558 [2024-12-10 03:07:29.737034] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.500  [2024-12-10T03:07:31.831Z] Copying: 31/256 [MB] (31 MBps) [2024-12-10T03:07:32.776Z] Copying: 57/256 [MB] (25 MBps) [2024-12-10T03:07:34.161Z] Copying: 87/256 [MB] (29 MBps) [2024-12-10T03:07:35.104Z] Copying: 114/256 [MB] (26 MBps) [2024-12-10T03:07:36.044Z] Copying: 132/256 [MB] (18 MBps) [2024-12-10T03:07:36.988Z] Copying: 148/256 [MB] (16 MBps) [2024-12-10T03:07:37.930Z] Copying: 161/256 [MB] (12 MBps) [2024-12-10T03:07:38.874Z] Copying: 174/256 [MB] (12 MBps) [2024-12-10T03:07:39.818Z] Copying: 192/256 [MB] (18 MBps) [2024-12-10T03:07:40.760Z] Copying: 216/256 [MB] (24 MBps) [2024-12-10T03:07:41.701Z] Copying: 242/256 [MB] (26 MBps) [2024-12-10T03:07:41.701Z] Copying: 256/256 [MB] (average 21 MBps)[2024-12-10 03:07:41.383724] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:47.313 [2024-12-10 03:07:41.392791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.313 [2024-12-10 03:07:41.392829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:47.313 [2024-12-10 03:07:41.392842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:47.313 [2024-12-10 03:07:41.392855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.313 [2024-12-10 03:07:41.392876] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:47.313 [2024-12-10 03:07:41.395488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.313 [2024-12-10 03:07:41.395516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:47.313 [2024-12-10 03:07:41.395527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.598 ms 00:19:47.313 [2024-12-10 03:07:41.395535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.313 [2024-12-10 03:07:41.397454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.313 [2024-12-10 03:07:41.397489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:47.313 [2024-12-10 03:07:41.397498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.898 ms 00:19:47.313 [2024-12-10 03:07:41.397506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.313 [2024-12-10 03:07:41.404468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.313 [2024-12-10 03:07:41.404505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:47.314 [2024-12-10 03:07:41.404514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.945 ms 00:19:47.314 [2024-12-10 03:07:41.404522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.314 [2024-12-10 03:07:41.411396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.314 
[2024-12-10 03:07:41.411426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:47.314 [2024-12-10 03:07:41.411435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.833 ms 00:19:47.314 [2024-12-10 03:07:41.411443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.314 [2024-12-10 03:07:41.434163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.314 [2024-12-10 03:07:41.434206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:47.314 [2024-12-10 03:07:41.434217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.682 ms 00:19:47.314 [2024-12-10 03:07:41.434224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.314 [2024-12-10 03:07:41.448410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.314 [2024-12-10 03:07:41.448447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:47.314 [2024-12-10 03:07:41.448460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.152 ms 00:19:47.314 [2024-12-10 03:07:41.448467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.314 [2024-12-10 03:07:41.448597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.314 [2024-12-10 03:07:41.448608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:47.314 [2024-12-10 03:07:41.448616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:47.314 [2024-12-10 03:07:41.448630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.314 [2024-12-10 03:07:41.471863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.314 [2024-12-10 03:07:41.471894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:47.314 [2024-12-10 03:07:41.471910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.218 ms 00:19:47.314 [2024-12-10 03:07:41.471917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.314 [2024-12-10 03:07:41.494790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.314 [2024-12-10 03:07:41.494820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:47.314 [2024-12-10 03:07:41.494830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.841 ms 00:19:47.314 [2024-12-10 03:07:41.494837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.314 [2024-12-10 03:07:41.516890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.314 [2024-12-10 03:07:41.516922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:47.314 [2024-12-10 03:07:41.516931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.020 ms 00:19:47.314 [2024-12-10 03:07:41.516937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.314 [2024-12-10 03:07:41.538592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.314 [2024-12-10 03:07:41.538629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:47.314 [2024-12-10 03:07:41.538639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.598 ms 00:19:47.314 [2024-12-10 03:07:41.538645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.314 [2024-12-10 03:07:41.538677] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:47.314 [2024-12-10 03:07:41.538693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538868] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.538997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 
03:07:41.539048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:47.314 [2024-12-10 03:07:41.539143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:19:47.315 [2024-12-10 03:07:41.539228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:47.315 [2024-12-10 03:07:41.539448] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:47.315 [2024-12-10 03:07:41.539456] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9db8c7-13a0-427a-b0c5-b6bfa89fe0fd 00:19:47.315 [2024-12-10 03:07:41.539464] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:47.315 [2024-12-10 03:07:41.539471] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:47.315 [2024-12-10 03:07:41.539478] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:47.315 [2024-12-10 03:07:41.539485] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:47.315 [2024-12-10 03:07:41.539492] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:47.315 [2024-12-10 03:07:41.539499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:47.315 [2024-12-10 03:07:41.539506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:47.315 [2024-12-10 03:07:41.539512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:47.315 [2024-12-10 03:07:41.539519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:47.315 [2024-12-10 03:07:41.539525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.315 [2024-12-10 03:07:41.539535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:47.315 [2024-12-10 03:07:41.539542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:19:47.315 [2024-12-10 03:07:41.539549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.315 [2024-12-10 03:07:41.551781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.315 [2024-12-10 03:07:41.551811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:47.315 [2024-12-10 03:07:41.551821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.215 ms 00:19:47.315 [2024-12-10 03:07:41.551828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.315 [2024-12-10 03:07:41.552186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.315 [2024-12-10 03:07:41.552206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:47.315 [2024-12-10 03:07:41.552214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:19:47.315 [2024-12-10 03:07:41.552222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.315 [2024-12-10 03:07:41.586859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.315 [2024-12-10 03:07:41.586893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:47.315 [2024-12-10 03:07:41.586902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.315 [2024-12-10 03:07:41.586910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.315 [2024-12-10 03:07:41.586992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.315 [2024-12-10 03:07:41.587002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.315 [2024-12-10 03:07:41.587010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:47.315 [2024-12-10 03:07:41.587017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.315 [2024-12-10 03:07:41.587055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.315 [2024-12-10 03:07:41.587064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.315 [2024-12-10 03:07:41.587071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.315 [2024-12-10 03:07:41.587078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.315 [2024-12-10 03:07:41.587094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.315 [2024-12-10 03:07:41.587104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.315 [2024-12-10 03:07:41.587111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.315 [2024-12-10 03:07:41.587118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.315 [2024-12-10 03:07:41.662783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.315 [2024-12-10 03:07:41.662823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.315 [2024-12-10 03:07:41.662833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.315 [2024-12-10 03:07:41.662842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.573 [2024-12-10 03:07:41.724620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.573 [2024-12-10 03:07:41.724659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.573 [2024-12-10 03:07:41.724669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.573 [2024-12-10 03:07:41.724676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.573 [2024-12-10 03:07:41.724741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.573 [2024-12-10 03:07:41.724750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.573 [2024-12-10 03:07:41.724758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.573 [2024-12-10 03:07:41.724766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.573 [2024-12-10 03:07:41.724793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.573 [2024-12-10 03:07:41.724801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.573 [2024-12-10 03:07:41.724811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.573 [2024-12-10 03:07:41.724819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.573 [2024-12-10 03:07:41.724904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.573 [2024-12-10 03:07:41.724913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.573 [2024-12-10 03:07:41.724921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.573 [2024-12-10 03:07:41.724928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.573 [2024-12-10 03:07:41.724956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.573 [2024-12-10 03:07:41.724964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:47.573 
[2024-12-10 03:07:41.724972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:47.573 [2024-12-10 03:07:41.724981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:47.573 [2024-12-10 03:07:41.725017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:47.573 [2024-12-10 03:07:41.725025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:47.573 [2024-12-10 03:07:41.725033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:47.573 [2024-12-10 03:07:41.725040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:47.573 [2024-12-10 03:07:41.725078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:47.573 [2024-12-10 03:07:41.725088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:47.573 [2024-12-10 03:07:41.725098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:47.573 [2024-12-10 03:07:41.725105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:47.573 [2024-12-10 03:07:41.725229] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.429 ms, result 0
00:19:48.507
00:19:48.507
00:19:48.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:48.507 03:07:42 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76690
00:19:48.507 03:07:42 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:19:48.507 03:07:42 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76690
00:19:48.507 03:07:42 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76690 ']'
00:19:48.507 03:07:42 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:48.507 03:07:42 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:48.507 03:07:42 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:48.507 03:07:42 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:48.507 03:07:42 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:19:48.507 [2024-12-10 03:07:42.768052] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
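Reassembled from the xtrace lines above, the target bring-up that ftl/trim.sh performs looks roughly like the sketch below; the paths, the -L ftl_init flag and pid 76690 are taken verbatim from the trace, while the backgrounding and $! assignment are an assumption (the trace only shows the resulting pid being stored and waited on):

```bash
# Sketch of the traced trim.sh@71-73 startup steps (assumptions noted above).
spdk_dir=/home/vagrant/spdk_repo/spdk          # checkout path from the trace
"$spdk_dir/build/bin/spdk_tgt" -L ftl_init &   # trim.sh@71
svcpid=$!                                      # trim.sh@72 (pid 76690 in this run)
waitforlisten "$svcpid"                        # trim.sh@73: autotest_common.sh polls
                                               # /var/tmp/spdk.sock, max_retries=100
```

Once the socket is up, the rest of the test is driven over RPC, as the trace below shows: rpc.py load_config replays the FTL startup, two bdev_ftl_unmap calls trim 1024-block ranges at LBA 0 and LBA 23591936 (the first and last 1024 entries of the 23592960-entry L2P space), and killprocess 76690 shuts the target down.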
00:19:48.507 [2024-12-10 03:07:42.768309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76690 ] 00:19:48.766 [2024-12-10 03:07:42.927115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.766 [2024-12-10 03:07:43.021032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.335 03:07:43 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.335 03:07:43 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:49.335 03:07:43 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:49.593 [2024-12-10 03:07:43.802032] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.593 [2024-12-10 03:07:43.802092] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.593 [2024-12-10 03:07:43.972732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.593 [2024-12-10 03:07:43.972777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:49.593 [2024-12-10 03:07:43.972791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:49.593 [2024-12-10 03:07:43.972799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-12-10 03:07:43.975407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-12-10 03:07:43.975442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:49.852 [2024-12-10 03:07:43.975453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.588 ms 00:19:49.852 [2024-12-10 03:07:43.975460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-12-10 03:07:43.975568] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:49.852 [2024-12-10 03:07:43.976257] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:49.852 [2024-12-10 03:07:43.976285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-12-10 03:07:43.976293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:49.852 [2024-12-10 03:07:43.976303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:19:49.852 [2024-12-10 03:07:43.976311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-12-10 03:07:43.977448] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:49.852 [2024-12-10 03:07:43.989600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-12-10 03:07:43.989641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:49.852 [2024-12-10 03:07:43.989653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.155 ms 00:19:49.852 [2024-12-10 03:07:43.989663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-12-10 03:07:43.989772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-12-10 03:07:43.989785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:49.852 [2024-12-10 03:07:43.989793] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:49.852 [2024-12-10 03:07:43.989802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-12-10 03:07:43.994534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-12-10 03:07:43.994572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.852 [2024-12-10 03:07:43.994581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.685 ms 00:19:49.852 [2024-12-10 03:07:43.994590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-12-10 03:07:43.994682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-12-10 03:07:43.994694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.852 [2024-12-10 03:07:43.994702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:49.852 [2024-12-10 03:07:43.994714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-12-10 03:07:43.994738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-12-10 03:07:43.994748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:49.852 [2024-12-10 03:07:43.994755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:49.852 [2024-12-10 03:07:43.994763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-12-10 03:07:43.994785] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:49.852 [2024-12-10 03:07:43.998120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-12-10 03:07:43.998151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.852 [2024-12-10 03:07:43.998161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:19:49.852 [2024-12-10 03:07:43.998168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-12-10 03:07:43.998205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-12-10 03:07:43.998214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:49.852 [2024-12-10 03:07:43.998223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:49.852 [2024-12-10 03:07:43.998235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-12-10 03:07:43.998255] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:49.852 [2024-12-10 03:07:43.998273] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:49.852 [2024-12-10 03:07:43.998314] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:49.852 [2024-12-10 03:07:43.998328] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:49.852 [2024-12-10 03:07:43.998440] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:49.852 [2024-12-10 03:07:43.998451] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:49.852 [2024-12-10 03:07:43.998465] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:49.852 [2024-12-10 03:07:43.998475] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:49.852 [2024-12-10 03:07:43.998485] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:49.852 [2024-12-10 03:07:43.998493] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:49.852 [2024-12-10 03:07:43.998503] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:49.852 [2024-12-10 03:07:43.998510] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:49.852 [2024-12-10 03:07:43.998520] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:49.852 [2024-12-10 03:07:43.998527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-12-10 03:07:43.998535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:49.852 [2024-12-10 03:07:43.998543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:19:49.852 [2024-12-10 03:07:43.998552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-12-10 03:07:43.998640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-12-10 03:07:43.998650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:49.852 [2024-12-10 03:07:43.998658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:49.852 [2024-12-10 03:07:43.998666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-12-10 03:07:43.998774] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:49.853 [2024-12-10 03:07:43.998786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:49.853 [2024-12-10 03:07:43.998794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.853 [2024-12-10 03:07:43.998803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.853 [2024-12-10 03:07:43.998810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:49.853 [2024-12-10 03:07:43.998820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:49.853 [2024-12-10 03:07:43.998827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:49.853 [2024-12-10 03:07:43.998837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:49.853 [2024-12-10 03:07:43.998844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:49.853 [2024-12-10 03:07:43.998852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.853 [2024-12-10 03:07:43.998858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:49.853 [2024-12-10 03:07:43.998866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:49.853 [2024-12-10 03:07:43.998872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.853 [2024-12-10 03:07:43.998880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:49.853 [2024-12-10 03:07:43.998887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:49.853 [2024-12-10 03:07:43.998895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.853 
[2024-12-10 03:07:43.998902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:49.853 [2024-12-10 03:07:43.998914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:49.853 [2024-12-10 03:07:43.998925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.853 [2024-12-10 03:07:43.998934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:49.853 [2024-12-10 03:07:43.998940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:49.853 [2024-12-10 03:07:43.998948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.853 [2024-12-10 03:07:43.998955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:49.853 [2024-12-10 03:07:43.998964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:49.853 [2024-12-10 03:07:43.998971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.853 [2024-12-10 03:07:43.998979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:49.853 [2024-12-10 03:07:43.998985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:49.853 [2024-12-10 03:07:43.998993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.853 [2024-12-10 03:07:43.999000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:49.853 [2024-12-10 03:07:43.999009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:49.853 [2024-12-10 03:07:43.999015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.853 [2024-12-10 03:07:43.999023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:49.853 [2024-12-10 03:07:43.999030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:49.853 [2024-12-10 03:07:43.999037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.853 [2024-12-10 03:07:43.999043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:49.853 [2024-12-10 03:07:43.999051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:49.853 [2024-12-10 03:07:43.999057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.853 [2024-12-10 03:07:43.999066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:49.853 [2024-12-10 03:07:43.999072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:49.853 [2024-12-10 03:07:43.999082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.853 [2024-12-10 03:07:43.999088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:49.853 [2024-12-10 03:07:43.999096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:49.853 [2024-12-10 03:07:43.999102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.853 [2024-12-10 03:07:43.999110] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:49.853 [2024-12-10 03:07:43.999118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:49.853 [2024-12-10 03:07:43.999127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.853 [2024-12-10 03:07:43.999134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.853 [2024-12-10 03:07:43.999142] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:49.853 [2024-12-10 03:07:43.999149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:49.853 [2024-12-10 03:07:43.999159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:49.853 [2024-12-10 03:07:43.999166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:49.853 [2024-12-10 03:07:43.999174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:49.853 [2024-12-10 03:07:43.999180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:49.853 [2024-12-10 03:07:43.999190] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:49.853 [2024-12-10 03:07:43.999198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.853 [2024-12-10 03:07:43.999210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:49.853 [2024-12-10 03:07:43.999218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:49.853 [2024-12-10 03:07:43.999226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:49.853 [2024-12-10 03:07:43.999233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:49.853 [2024-12-10 03:07:43.999241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:49.853 [2024-12-10 03:07:43.999248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:49.853 [2024-12-10 03:07:43.999256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:49.853 [2024-12-10 03:07:43.999263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:49.853 [2024-12-10 03:07:43.999271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:49.853 [2024-12-10 03:07:43.999278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:49.853 [2024-12-10 03:07:43.999287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:49.853 [2024-12-10 03:07:43.999293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:49.853 [2024-12-10 03:07:43.999302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:49.853 [2024-12-10 03:07:43.999309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:49.853 [2024-12-10 03:07:43.999317] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:49.853 [2024-12-10 
03:07:43.999325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.853 [2024-12-10 03:07:43.999336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:49.853 [2024-12-10 03:07:43.999343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:49.853 [2024-12-10 03:07:43.999351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:49.853 [2024-12-10 03:07:43.999358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:49.853 [2024-12-10 03:07:43.999367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.853 [2024-12-10 03:07:43.999383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:49.853 [2024-12-10 03:07:43.999395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:19:49.853 [2024-12-10 03:07:43.999404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.853 [2024-12-10 03:07:44.024793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.853 [2024-12-10 03:07:44.024827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.853 [2024-12-10 03:07:44.024839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.333 ms 00:19:49.853 [2024-12-10 03:07:44.024848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.853 [2024-12-10 03:07:44.024960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.853 [2024-12-10 03:07:44.024970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:49.853 [2024-12-10 03:07:44.024979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:49.853 [2024-12-10 03:07:44.024986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.853 [2024-12-10 03:07:44.054907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.853 [2024-12-10 03:07:44.054941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.853 [2024-12-10 03:07:44.054952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.899 ms 00:19:49.853 [2024-12-10 03:07:44.054959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.853 [2024-12-10 03:07:44.055011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.853 [2024-12-10 03:07:44.055020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.853 [2024-12-10 03:07:44.055029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:49.853 [2024-12-10 03:07:44.055037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.853 [2024-12-10 03:07:44.055346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.853 [2024-12-10 03:07:44.055360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.853 [2024-12-10 03:07:44.055372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:19:49.853 [2024-12-10 03:07:44.055392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:49.853 [2024-12-10 03:07:44.055513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.853 [2024-12-10 03:07:44.055522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.853 [2024-12-10 03:07:44.055531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:49.853 [2024-12-10 03:07:44.055538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.853 [2024-12-10 03:07:44.069468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.853 [2024-12-10 03:07:44.069499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.854 [2024-12-10 03:07:44.069510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.909 ms 00:19:49.854 [2024-12-10 03:07:44.069518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.854 [2024-12-10 03:07:44.099726] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:49.854 [2024-12-10 03:07:44.099766] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:49.854 [2024-12-10 03:07:44.099781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.854 [2024-12-10 03:07:44.099790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:49.854 [2024-12-10 03:07:44.099801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.158 ms 00:19:49.854 [2024-12-10 03:07:44.099814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.854 [2024-12-10 03:07:44.132827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.854 [2024-12-10 03:07:44.132868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:49.854 [2024-12-10 03:07:44.132882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.938 ms 00:19:49.854 [2024-12-10 03:07:44.132890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.854 [2024-12-10 03:07:44.144255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.854 [2024-12-10 03:07:44.144289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:49.854 [2024-12-10 03:07:44.144303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.283 ms 00:19:49.854 [2024-12-10 03:07:44.144310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.854 [2024-12-10 03:07:44.155801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.854 [2024-12-10 03:07:44.155832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:49.854 [2024-12-10 03:07:44.155844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.425 ms 00:19:49.854 [2024-12-10 03:07:44.155851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.854 [2024-12-10 03:07:44.156467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.854 [2024-12-10 03:07:44.156488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:49.854 [2024-12-10 03:07:44.156499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:19:49.854 [2024-12-10 03:07:44.156506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.854 [2024-12-10 
03:07:44.211083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.854 [2024-12-10 03:07:44.211127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:49.854 [2024-12-10 03:07:44.211141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.552 ms 00:19:49.854 [2024-12-10 03:07:44.211149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.854 [2024-12-10 03:07:44.221423] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:50.112 [2024-12-10 03:07:44.235067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.112 [2024-12-10 03:07:44.235105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:50.112 [2024-12-10 03:07:44.235119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.838 ms 00:19:50.112 [2024-12-10 03:07:44.235130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.112 [2024-12-10 03:07:44.235211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.112 [2024-12-10 03:07:44.235223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:50.112 [2024-12-10 03:07:44.235232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:50.112 [2024-12-10 03:07:44.235241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.112 [2024-12-10 03:07:44.235290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.112 [2024-12-10 03:07:44.235300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:50.112 [2024-12-10 03:07:44.235308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:50.112 [2024-12-10 03:07:44.235319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.112 [2024-12-10 03:07:44.235341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.112 [2024-12-10 03:07:44.235350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:50.112 [2024-12-10 03:07:44.235358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:50.112 [2024-12-10 03:07:44.235369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.112 [2024-12-10 03:07:44.235412] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:50.112 [2024-12-10 03:07:44.235425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.112 [2024-12-10 03:07:44.235435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:50.112 [2024-12-10 03:07:44.235444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:50.112 [2024-12-10 03:07:44.235450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.112 [2024-12-10 03:07:44.258509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.112 [2024-12-10 03:07:44.258547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:50.112 [2024-12-10 03:07:44.258560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.032 ms 00:19:50.112 [2024-12-10 03:07:44.258568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.112 [2024-12-10 03:07:44.258653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.112 [2024-12-10 03:07:44.258663] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:50.112 [2024-12-10 03:07:44.258673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:50.112 [2024-12-10 03:07:44.258683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.112 [2024-12-10 03:07:44.259774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:50.112 [2024-12-10 03:07:44.262934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 286.767 ms, result 0 00:19:50.112 [2024-12-10 03:07:44.264003] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.112 Some configs were skipped because the RPC state that can call them passed over. 00:19:50.112 03:07:44 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:50.112 [2024-12-10 03:07:44.486543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.112 [2024-12-10 03:07:44.486597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:50.112 [2024-12-10 03:07:44.486610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:19:50.112 [2024-12-10 03:07:44.486619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.112 [2024-12-10 03:07:44.486651] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.747 ms, result 0 00:19:50.112 true 00:19:50.369 03:07:44 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:50.369 [2024-12-10 03:07:44.678305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.369 [2024-12-10 03:07:44.678350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:50.369 [2024-12-10 03:07:44.678362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.178 ms 00:19:50.369 [2024-12-10 03:07:44.678370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.369 [2024-12-10 03:07:44.678413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.289 ms, result 0 00:19:50.369 true 00:19:50.369 03:07:44 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76690 00:19:50.369 03:07:44 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76690 ']' 00:19:50.369 03:07:44 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76690 00:19:50.370 03:07:44 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:50.370 03:07:44 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.370 03:07:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76690 00:19:50.370 killing process with pid 76690 00:19:50.370 03:07:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:50.370 03:07:44 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:50.370 03:07:44 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76690' 00:19:50.370 03:07:44 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76690 00:19:50.370 03:07:44 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76690 00:19:51.304 [2024-12-10 03:07:45.396662] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.304 [2024-12-10 03:07:45.396712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:51.304 [2024-12-10 03:07:45.396723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:51.304 [2024-12-10 03:07:45.396730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.304 [2024-12-10 03:07:45.396750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:51.304 [2024-12-10 03:07:45.398889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.304 [2024-12-10 03:07:45.398914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:51.304 [2024-12-10 03:07:45.398926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.125 ms 00:19:51.304 [2024-12-10 03:07:45.398933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.304 [2024-12-10 03:07:45.399156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.304 [2024-12-10 03:07:45.399164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:51.304 [2024-12-10 03:07:45.399172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:19:51.304 [2024-12-10 03:07:45.399177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.304 [2024-12-10 03:07:45.402409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.304 [2024-12-10 03:07:45.402436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:51.304 [2024-12-10 03:07:45.402448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.215 ms 00:19:51.304 [2024-12-10 03:07:45.402454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.304 [2024-12-10 03:07:45.407666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.304 [2024-12-10 03:07:45.407693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:51.304 [2024-12-10 03:07:45.407704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.181 ms 00:19:51.304 [2024-12-10 03:07:45.407710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.304 [2024-12-10 03:07:45.415324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.304 [2024-12-10 03:07:45.415355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:51.304 [2024-12-10 03:07:45.415365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.572 ms 00:19:51.304 [2024-12-10 03:07:45.415371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.304 [2024-12-10 03:07:45.422176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.304 [2024-12-10 03:07:45.422208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:51.304 [2024-12-10 03:07:45.422218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.765 ms 00:19:51.304 [2024-12-10 03:07:45.422225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.304 [2024-12-10 03:07:45.422337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.304 [2024-12-10 03:07:45.422346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:51.304 [2024-12-10 03:07:45.422354] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:51.304 [2024-12-10 03:07:45.422359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.304 [2024-12-10 03:07:45.430216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.304 [2024-12-10 03:07:45.430243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:51.304 [2024-12-10 03:07:45.430252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.839 ms 00:19:51.304 [2024-12-10 03:07:45.430258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.304 [2024-12-10 03:07:45.437812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.304 [2024-12-10 03:07:45.437840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:51.304 [2024-12-10 03:07:45.437853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.524 ms 00:19:51.304 [2024-12-10 03:07:45.437859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.304 [2024-12-10 03:07:45.444762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.304 [2024-12-10 03:07:45.444788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:51.304 [2024-12-10 03:07:45.444796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.872 ms 00:19:51.304 [2024-12-10 03:07:45.444802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.304 [2024-12-10 03:07:45.451930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.304 [2024-12-10 03:07:45.451958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:51.304 [2024-12-10 03:07:45.451966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.075 ms 00:19:51.304 [2024-12-10 03:07:45.451972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.304 [2024-12-10 03:07:45.452001] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:51.304 [2024-12-10 03:07:45.452013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452083] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:51.304 [2024-12-10 03:07:45.452184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 
[2024-12-10 03:07:45.452250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:19:51.305 [2024-12-10 03:07:45.452430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:51.305 [2024-12-10 03:07:45.452708] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:51.305 [2024-12-10 03:07:45.452719] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9db8c7-13a0-427a-b0c5-b6bfa89fe0fd 00:19:51.305 [2024-12-10 03:07:45.452727] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:51.305 [2024-12-10 03:07:45.452735] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:51.305 [2024-12-10 03:07:45.452740] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:51.305 [2024-12-10 03:07:45.452748] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:51.305 [2024-12-10 03:07:45.452754] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:51.305 [2024-12-10 03:07:45.452761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:51.305 [2024-12-10 03:07:45.452767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:51.305 [2024-12-10 03:07:45.452774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:51.305 [2024-12-10 03:07:45.452779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:51.305 [2024-12-10 03:07:45.452786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:51.305 [2024-12-10 03:07:45.452792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:51.305 [2024-12-10 03:07:45.452799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:19:51.305 [2024-12-10 03:07:45.452805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.305 [2024-12-10 03:07:45.462637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.305 [2024-12-10 03:07:45.462662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:51.305 [2024-12-10 03:07:45.462673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.813 ms 00:19:51.305 [2024-12-10 03:07:45.462680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.305 [2024-12-10 03:07:45.462975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.305 [2024-12-10 03:07:45.462988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:51.305 [2024-12-10 03:07:45.462998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:19:51.305 [2024-12-10 03:07:45.463004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.306 [2024-12-10 03:07:45.498317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.306 [2024-12-10 03:07:45.498346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.306 [2024-12-10 03:07:45.498355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.306 [2024-12-10 03:07:45.498361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.306 [2024-12-10 03:07:45.498451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.306 [2024-12-10 03:07:45.498459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.306 [2024-12-10 03:07:45.498469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.306 [2024-12-10 03:07:45.498475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.306 [2024-12-10 03:07:45.498509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.306 [2024-12-10 03:07:45.498516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.306 [2024-12-10 03:07:45.498525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.306 [2024-12-10 03:07:45.498531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.306 [2024-12-10 03:07:45.498546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.306 [2024-12-10 03:07:45.498553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.306 [2024-12-10 03:07:45.498559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.306 [2024-12-10 03:07:45.498566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.306 [2024-12-10 03:07:45.558534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.306 [2024-12-10 03:07:45.558571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.306 [2024-12-10 03:07:45.558582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.306 [2024-12-10 03:07:45.558588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.306 [2024-12-10 
03:07:45.608318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.306 [2024-12-10 03:07:45.608357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.306 [2024-12-10 03:07:45.608369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.306 [2024-12-10 03:07:45.608390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.306 [2024-12-10 03:07:45.609400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.306 [2024-12-10 03:07:45.609419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.306 [2024-12-10 03:07:45.609431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.306 [2024-12-10 03:07:45.609437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.306 [2024-12-10 03:07:45.609466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.306 [2024-12-10 03:07:45.609472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.306 [2024-12-10 03:07:45.609480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.306 [2024-12-10 03:07:45.609486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.306 [2024-12-10 03:07:45.609566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.306 [2024-12-10 03:07:45.609574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.306 [2024-12-10 03:07:45.609582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.306 [2024-12-10 03:07:45.609588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.306 [2024-12-10 03:07:45.609615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.306 [2024-12-10 03:07:45.609622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:51.306 [2024-12-10 03:07:45.609630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.306 [2024-12-10 03:07:45.609636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.306 [2024-12-10 03:07:45.609669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.306 [2024-12-10 03:07:45.609676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.306 [2024-12-10 03:07:45.609685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.306 [2024-12-10 03:07:45.609691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.306 [2024-12-10 03:07:45.609726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.306 [2024-12-10 03:07:45.609734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.306 [2024-12-10 03:07:45.609742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.306 [2024-12-10 03:07:45.609747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.306 [2024-12-10 03:07:45.609857] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 213.175 ms, result 0 00:19:51.872 03:07:46 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:51.872 03:07:46 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:51.872 [2024-12-10 03:07:46.175741] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:51.872 [2024-12-10 03:07:46.175832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76751 ] 00:19:52.130 [2024-12-10 03:07:46.325414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.130 [2024-12-10 03:07:46.403490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.388 [2024-12-10 03:07:46.613673] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.388 [2024-12-10 03:07:46.613731] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.388 [2024-12-10 03:07:46.761581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.388 [2024-12-10 03:07:46.761621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:52.388 [2024-12-10 03:07:46.761631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:52.388 [2024-12-10 03:07:46.761638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.388 [2024-12-10 03:07:46.763673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.388 [2024-12-10 03:07:46.763704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.388 [2024-12-10 03:07:46.763712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.023 ms 00:19:52.388 [2024-12-10 03:07:46.763718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.388 [2024-12-10 03:07:46.763773] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:52.388 [2024-12-10 03:07:46.764281] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:52.388 [2024-12-10 03:07:46.764298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.388 [2024-12-10 03:07:46.764304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.388 [2024-12-10 03:07:46.764311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:19:52.388 [2024-12-10 03:07:46.764316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.388 [2024-12-10 03:07:46.765289] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:52.647 [2024-12-10 03:07:46.774821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.647 [2024-12-10 03:07:46.774852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:52.647 [2024-12-10 03:07:46.774861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.533 ms 00:19:52.647 [2024-12-10 03:07:46.774867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.647 [2024-12-10 03:07:46.774931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.647 [2024-12-10 03:07:46.774940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:52.647 [2024-12-10 03:07:46.774947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.016 ms 00:19:52.647 [2024-12-10 03:07:46.774952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.647 [2024-12-10 03:07:46.779311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.647 [2024-12-10 03:07:46.779339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.647 [2024-12-10 03:07:46.779346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.330 ms 00:19:52.647 [2024-12-10 03:07:46.779352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.647 [2024-12-10 03:07:46.779433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.647 [2024-12-10 03:07:46.779441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.647 [2024-12-10 03:07:46.779447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:52.647 [2024-12-10 03:07:46.779453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.647 [2024-12-10 03:07:46.779474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.647 [2024-12-10 03:07:46.779480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:52.647 [2024-12-10 03:07:46.779485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:52.647 [2024-12-10 03:07:46.779491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.647 [2024-12-10 03:07:46.779506] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:52.647 [2024-12-10 03:07:46.782199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.647 [2024-12-10 03:07:46.782222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.647 [2024-12-10 03:07:46.782229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.696 ms 00:19:52.647 [2024-12-10 03:07:46.782235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.647 [2024-12-10 03:07:46.782263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.647 [2024-12-10 03:07:46.782271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:52.647 [2024-12-10 03:07:46.782277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:52.647 [2024-12-10 03:07:46.782282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.647 [2024-12-10 03:07:46.782297] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:52.647 [2024-12-10 03:07:46.782312] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:52.647 [2024-12-10 03:07:46.782340] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:52.647 [2024-12-10 03:07:46.782351] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:52.647 [2024-12-10 03:07:46.782438] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:52.647 [2024-12-10 03:07:46.782447] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:52.647 [2024-12-10 03:07:46.782455] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:52.647 [2024-12-10 03:07:46.782465] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:52.648 [2024-12-10 03:07:46.782471] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:52.648 [2024-12-10 03:07:46.782477] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:52.648 [2024-12-10 03:07:46.782483] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:52.648 [2024-12-10 03:07:46.782489] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:52.648 [2024-12-10 03:07:46.782494] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:52.648 [2024-12-10 03:07:46.782500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.648 [2024-12-10 03:07:46.782505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:52.648 [2024-12-10 03:07:46.782511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:19:52.648 [2024-12-10 03:07:46.782516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.648 [2024-12-10 03:07:46.782583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.648 [2024-12-10 03:07:46.782592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:52.648 [2024-12-10 03:07:46.782597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:52.648 [2024-12-10 03:07:46.782603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.648 [2024-12-10 03:07:46.782677] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:52.648 [2024-12-10 03:07:46.782684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:52.648 [2024-12-10 03:07:46.782690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.648 [2024-12-10 03:07:46.782696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.648 [2024-12-10 03:07:46.782702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:52.648 [2024-12-10 03:07:46.782707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:52.648 [2024-12-10 03:07:46.782712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:52.648 [2024-12-10 03:07:46.782716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:52.648 [2024-12-10 03:07:46.782721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:52.648 [2024-12-10 03:07:46.782726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.648 [2024-12-10 03:07:46.782731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:52.648 [2024-12-10 03:07:46.782741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:52.648 [2024-12-10 03:07:46.782746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.648 [2024-12-10 03:07:46.782752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:52.648 [2024-12-10 03:07:46.782758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:52.648 [2024-12-10 03:07:46.782762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.648 [2024-12-10 03:07:46.782767] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:52.648 [2024-12-10 03:07:46.782772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:52.648 [2024-12-10 03:07:46.782777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.648 [2024-12-10 03:07:46.782782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:52.648 [2024-12-10 03:07:46.782788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:52.648 [2024-12-10 03:07:46.782792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.648 [2024-12-10 03:07:46.782797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:52.648 [2024-12-10 03:07:46.782803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:52.648 [2024-12-10 03:07:46.782807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.648 [2024-12-10 03:07:46.782812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:52.648 [2024-12-10 03:07:46.782817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:52.648 [2024-12-10 03:07:46.782822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.648 [2024-12-10 03:07:46.782826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:52.648 [2024-12-10 03:07:46.782832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:52.648 [2024-12-10 03:07:46.782836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.648 [2024-12-10 03:07:46.782841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:52.648 [2024-12-10 03:07:46.782846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:52.648 [2024-12-10 03:07:46.782851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.648 [2024-12-10 03:07:46.782856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:52.648 [2024-12-10 03:07:46.782861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:52.648 [2024-12-10 03:07:46.782866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.648 [2024-12-10 03:07:46.782871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:52.648 [2024-12-10 03:07:46.782875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:52.648 [2024-12-10 03:07:46.782880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.648 [2024-12-10 03:07:46.782885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:52.648 [2024-12-10 03:07:46.782890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:52.648 [2024-12-10 03:07:46.782895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.648 [2024-12-10 03:07:46.782899] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:52.648 [2024-12-10 03:07:46.782905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:52.648 [2024-12-10 03:07:46.782913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.648 [2024-12-10 03:07:46.782919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.648 [2024-12-10 03:07:46.782924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:52.648 
[2024-12-10 03:07:46.782929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:52.648 [2024-12-10 03:07:46.782934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:52.648 [2024-12-10 03:07:46.782939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:52.648 [2024-12-10 03:07:46.782944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:52.648 [2024-12-10 03:07:46.782949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:52.648 [2024-12-10 03:07:46.782955] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:52.648 [2024-12-10 03:07:46.782961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.648 [2024-12-10 03:07:46.782967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:52.648 [2024-12-10 03:07:46.782973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:52.648 [2024-12-10 03:07:46.782978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:52.648 [2024-12-10 03:07:46.782984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:52.648 [2024-12-10 03:07:46.782989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:52.648 [2024-12-10 03:07:46.782994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:52.648 [2024-12-10 03:07:46.782999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:52.648 [2024-12-10 03:07:46.783004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:52.648 [2024-12-10 03:07:46.783009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:52.648 [2024-12-10 03:07:46.783014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:52.648 [2024-12-10 03:07:46.783019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:52.648 [2024-12-10 03:07:46.783025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:52.648 [2024-12-10 03:07:46.783030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:52.648 [2024-12-10 03:07:46.783035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:52.648 [2024-12-10 03:07:46.783040] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:52.648 [2024-12-10 03:07:46.783046] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.648 [2024-12-10 03:07:46.783052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:52.648 [2024-12-10 03:07:46.783057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:52.648 [2024-12-10 03:07:46.783062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:52.648 [2024-12-10 03:07:46.783068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:52.648 [2024-12-10 03:07:46.783073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.648 [2024-12-10 03:07:46.783081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:52.648 [2024-12-10 03:07:46.783087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:19:52.648 [2024-12-10 03:07:46.783093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.648 [2024-12-10 03:07:46.803845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.648 [2024-12-10 03:07:46.803876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.648 [2024-12-10 03:07:46.803883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.712 ms 00:19:52.648 [2024-12-10 03:07:46.803889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.648 [2024-12-10 03:07:46.803996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.648 [2024-12-10 03:07:46.804005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:52.648 [2024-12-10 03:07:46.804012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:52.648 [2024-12-10 03:07:46.804018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.648 [2024-12-10 03:07:46.843127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.648 [2024-12-10 03:07:46.843163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.649 [2024-12-10 03:07:46.843175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.093 ms 00:19:52.649 [2024-12-10 03:07:46.843182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.843240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.843249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.649 [2024-12-10 03:07:46.843256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:52.649 [2024-12-10 03:07:46.843261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.843565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.843577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.649 [2024-12-10 03:07:46.843584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:19:52.649 [2024-12-10 03:07:46.843593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 
03:07:46.843696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.843704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.649 [2024-12-10 03:07:46.843710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:52.649 [2024-12-10 03:07:46.843716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.854458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.854484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.649 [2024-12-10 03:07:46.854492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.727 ms 00:19:52.649 [2024-12-10 03:07:46.854497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.864338] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:52.649 [2024-12-10 03:07:46.864368] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:52.649 [2024-12-10 03:07:46.864385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.864391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:52.649 [2024-12-10 03:07:46.864398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.800 ms 00:19:52.649 [2024-12-10 03:07:46.864403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.883011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.883044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:52.649 [2024-12-10 03:07:46.883052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.562 ms 00:19:52.649 [2024-12-10 03:07:46.883059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.891869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.891904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:52.649 [2024-12-10 03:07:46.891912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.755 ms 00:19:52.649 [2024-12-10 03:07:46.891918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.900519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.900546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:52.649 [2024-12-10 03:07:46.900554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.559 ms 00:19:52.649 [2024-12-10 03:07:46.900559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.901024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.901045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:52.649 [2024-12-10 03:07:46.901052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:19:52.649 [2024-12-10 03:07:46.901058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.945333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.945393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:52.649 [2024-12-10 03:07:46.945403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.258 ms 00:19:52.649 [2024-12-10 03:07:46.945409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.953453] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:52.649 [2024-12-10 03:07:46.964793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.964825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:52.649 [2024-12-10 03:07:46.964834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.323 ms 00:19:52.649 [2024-12-10 03:07:46.964844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.964914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.964922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:52.649 [2024-12-10 03:07:46.964929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:52.649 [2024-12-10 03:07:46.964935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.964970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.964976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:52.649 [2024-12-10 03:07:46.964982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:52.649 [2024-12-10 03:07:46.964991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.965016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.965023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:52.649 [2024-12-10 03:07:46.965029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:52.649 [2024-12-10 03:07:46.965035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.965058] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:52.649 [2024-12-10 03:07:46.965066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.965072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:52.649 [2024-12-10 03:07:46.965078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:52.649 [2024-12-10 03:07:46.965083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.982766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.982797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:52.649 [2024-12-10 03:07:46.982805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.668 ms 00:19:52.649 [2024-12-10 03:07:46.982812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.982879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.649 [2024-12-10 03:07:46.982887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:52.649 [2024-12-10 03:07:46.982893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:52.649 [2024-12-10 03:07:46.982899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.649 [2024-12-10 03:07:46.983528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:52.649 [2024-12-10 03:07:46.985796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.710 ms, result 0 00:19:52.649 [2024-12-10 03:07:46.986356] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:52.649 [2024-12-10 03:07:47.001083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:54.030  [2024-12-10T03:08:02.469Z] Copying: 256/256 [MB] (average 16 MBps)[2024-12-10 03:08:02.366425] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:08.081 [2024-12-10 03:08:02.376624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.081 [2024-12-10 03:08:02.376678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:08.081 [2024-12-10 03:08:02.376701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:08.081 [2024-12-10 03:08:02.376711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.081 [2024-12-10 03:08:02.376736] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:08.081 [2024-12-10 03:08:02.379682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.081 [2024-12-10 03:08:02.379719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:08.081 [2024-12-10 03:08:02.379731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.931 ms 00:20:08.081 [2024-12-10 03:08:02.379740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.081 [2024-12-10 03:08:02.380020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.081 [2024-12-10 03:08:02.380032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:08.081 [2024-12-10 03:08:02.380041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:20:08.081 [2024-12-10 03:08:02.380049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.081 [2024-12-10 03:08:02.383743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:20:08.081 [2024-12-10 03:08:02.383767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:08.081 [2024-12-10 03:08:02.383777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.673 ms 00:20:08.081 [2024-12-10 03:08:02.383785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.081 [2024-12-10 03:08:02.390680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.081 [2024-12-10 03:08:02.390719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:08.081 [2024-12-10 03:08:02.390730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.877 ms 00:20:08.081 [2024-12-10 03:08:02.390737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.081 [2024-12-10 03:08:02.416793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.081 [2024-12-10 03:08:02.416843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:08.081 [2024-12-10 03:08:02.416856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.992 ms 00:20:08.081 [2024-12-10 03:08:02.416863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.081 [2024-12-10 03:08:02.433079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.081 [2024-12-10 03:08:02.433128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:08.081 [2024-12-10 03:08:02.433147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.154 ms 00:20:08.081 [2024-12-10 03:08:02.433155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.081 [2024-12-10 03:08:02.433308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.081 [2024-12-10 03:08:02.433321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:08.081 [2024-12-10 03:08:02.433339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:08.081 [2024-12-10 03:08:02.433346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.081 [2024-12-10 03:08:02.459257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.081 [2024-12-10 03:08:02.459304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:08.081 [2024-12-10 03:08:02.459314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.894 ms 00:20:08.081 [2024-12-10 03:08:02.459322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.344 [2024-12-10 03:08:02.484760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.344 [2024-12-10 03:08:02.484806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:08.344 [2024-12-10 03:08:02.484817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.391 ms 00:20:08.344 [2024-12-10 03:08:02.484824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.344 [2024-12-10 03:08:02.509898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.344 [2024-12-10 03:08:02.509946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:08.344 [2024-12-10 03:08:02.509956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.026 ms 00:20:08.344 [2024-12-10 03:08:02.509963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.344 
[2024-12-10 03:08:02.534693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.344 [2024-12-10 03:08:02.534739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:08.344 [2024-12-10 03:08:02.534750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.643 ms 00:20:08.344 [2024-12-10 03:08:02.534757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.344 [2024-12-10 03:08:02.534803] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:08.344 [2024-12-10 03:08:02.534819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:20:08.344 [2024-12-10 03:08:02.534976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.534997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:08.344 [2024-12-10 03:08:02.535397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535568] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:08.345 [2024-12-10 03:08:02.535615] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:08.345 [2024-12-10 03:08:02.535623] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9db8c7-13a0-427a-b0c5-b6bfa89fe0fd 00:20:08.345 [2024-12-10 03:08:02.535632] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:08.345 [2024-12-10 03:08:02.535641] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:08.345 [2024-12-10 03:08:02.535648] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:08.345 [2024-12-10 03:08:02.535657] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:08.345 [2024-12-10 03:08:02.535664] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:08.345 [2024-12-10 03:08:02.535671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:08.345 [2024-12-10 03:08:02.535682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:08.345 [2024-12-10 03:08:02.535689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:08.345 [2024-12-10 03:08:02.535696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:08.345 [2024-12-10 03:08:02.535704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.345 [2024-12-10 03:08:02.535712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:08.345 [2024-12-10 03:08:02.535721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:20:08.345 [2024-12-10 03:08:02.535728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.345 [2024-12-10 03:08:02.549259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.345 [2024-12-10 03:08:02.549311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:08.345 [2024-12-10 03:08:02.549322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.499 ms 00:20:08.345 [2024-12-10 03:08:02.549330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.345 [2024-12-10 03:08:02.549746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.345 [2024-12-10 03:08:02.549770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:08.345 [2024-12-10 03:08:02.549781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:20:08.345 [2024-12-10 03:08:02.549789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.345 [2024-12-10 03:08:02.588727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.345 [2024-12-10 03:08:02.588780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:08.345 [2024-12-10 03:08:02.588792] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.345 [2024-12-10 03:08:02.588807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.345 [2024-12-10 03:08:02.588911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.345 [2024-12-10 03:08:02.588922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:08.345 [2024-12-10 03:08:02.588930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.345 [2024-12-10 03:08:02.588939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.345 [2024-12-10 03:08:02.588989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.345 [2024-12-10 03:08:02.588999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.345 [2024-12-10 03:08:02.589007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.345 [2024-12-10 03:08:02.589015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.345 [2024-12-10 03:08:02.589035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.345 [2024-12-10 03:08:02.589044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.345 [2024-12-10 03:08:02.589052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.345 [2024-12-10 03:08:02.589059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.345 [2024-12-10 03:08:02.674090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.345 [2024-12-10 03:08:02.674141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.345 [2024-12-10 03:08:02.674156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.345 [2024-12-10 03:08:02.674165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.606 [2024-12-10 03:08:02.743220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.606 [2024-12-10 03:08:02.743280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.606 [2024-12-10 03:08:02.743292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.606 [2024-12-10 03:08:02.743300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.606 [2024-12-10 03:08:02.743398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.606 [2024-12-10 03:08:02.743409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:08.606 [2024-12-10 03:08:02.743418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.606 [2024-12-10 03:08:02.743427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.606 [2024-12-10 03:08:02.743460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.606 [2024-12-10 03:08:02.743475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:08.606 [2024-12-10 03:08:02.743485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.606 [2024-12-10 03:08:02.743492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.606 [2024-12-10 03:08:02.743588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.606 [2024-12-10 03:08:02.743600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:20:08.606 [2024-12-10 03:08:02.743609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.606 [2024-12-10 03:08:02.743617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.606 [2024-12-10 03:08:02.743655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.606 [2024-12-10 03:08:02.743665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:08.606 [2024-12-10 03:08:02.743678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.606 [2024-12-10 03:08:02.743687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.606 [2024-12-10 03:08:02.743732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.606 [2024-12-10 03:08:02.743741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:08.607 [2024-12-10 03:08:02.743750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.607 [2024-12-10 03:08:02.743759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.607 [2024-12-10 03:08:02.743807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.607 [2024-12-10 03:08:02.743821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:08.607 [2024-12-10 03:08:02.743830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.607 [2024-12-10 03:08:02.743838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.607 [2024-12-10 03:08:02.744009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.375 ms, result 0 00:20:09.177 00:20:09.178 00:20:09.178 03:08:03 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:09.178 03:08:03 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:09.746 03:08:04 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:10.007 [2024-12-10 03:08:04.139611] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
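
At this point trim.sh has verified that the 4 MiB trimmed range in test/ftl/data reads back as all zeroes (cmp against /dev/zero), recorded the file's md5sum, and launched spdk_dd to write 1024 blocks of random_pattern through the ftl0 bdev using the ftl.json config. A minimal read-back check in the same spirit, assuming the ftl0 bdev and paths from this run are still available; the /tmp/readback path, and the assumption that random_pattern is exactly the 4 MiB being copied, are illustrative and not taken from this log:

  # Read the 1024 blocks just written back out of the ftl0 bdev into a plain file.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/tmp/readback \
      --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  # If random_pattern is exactly the data that was written, the checksums match.
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern /tmp/readback
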
00:20:10.007 [2024-12-10 03:08:04.139760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76945 ] 00:20:10.007 [2024-12-10 03:08:04.301461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.269 [2024-12-10 03:08:04.424270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.531 [2024-12-10 03:08:04.719491] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:10.531 [2024-12-10 03:08:04.719569] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:10.531 [2024-12-10 03:08:04.881013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.531 [2024-12-10 03:08:04.881074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:10.531 [2024-12-10 03:08:04.881089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:10.531 [2024-12-10 03:08:04.881098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.531 [2024-12-10 03:08:04.884061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.531 [2024-12-10 03:08:04.884110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.531 [2024-12-10 03:08:04.884121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.942 ms 00:20:10.531 [2024-12-10 03:08:04.884130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.531 [2024-12-10 03:08:04.884243] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:10.531 [2024-12-10 03:08:04.885543] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:10.531 [2024-12-10 03:08:04.885595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.531 [2024-12-10 03:08:04.885607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:10.531 [2024-12-10 03:08:04.885617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:20:10.531 [2024-12-10 03:08:04.885626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.531 [2024-12-10 03:08:04.887335] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:10.531 [2024-12-10 03:08:04.901458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.531 [2024-12-10 03:08:04.901511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:10.531 [2024-12-10 03:08:04.901524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.126 ms 00:20:10.531 [2024-12-10 03:08:04.901533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.531 [2024-12-10 03:08:04.901651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.531 [2024-12-10 03:08:04.901663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:10.531 [2024-12-10 03:08:04.901673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:10.531 [2024-12-10 03:08:04.901681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.531 [2024-12-10 03:08:04.909464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:10.531 [2024-12-10 03:08:04.909506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:10.531 [2024-12-10 03:08:04.909516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.737 ms 00:20:10.531 [2024-12-10 03:08:04.909523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.531 [2024-12-10 03:08:04.909626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.531 [2024-12-10 03:08:04.909637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:10.531 [2024-12-10 03:08:04.909646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:10.531 [2024-12-10 03:08:04.909655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.531 [2024-12-10 03:08:04.909685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.531 [2024-12-10 03:08:04.909695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:10.531 [2024-12-10 03:08:04.909703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:10.531 [2024-12-10 03:08:04.909712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.531 [2024-12-10 03:08:04.909735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:10.795 [2024-12-10 03:08:04.913624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.795 [2024-12-10 03:08:04.913662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:10.795 [2024-12-10 03:08:04.913672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.896 ms 00:20:10.795 [2024-12-10 03:08:04.913680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.795 [2024-12-10 03:08:04.913757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.795 [2024-12-10 03:08:04.913768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:10.795 [2024-12-10 03:08:04.913778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:10.795 [2024-12-10 03:08:04.913786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.795 [2024-12-10 03:08:04.913811] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:10.795 [2024-12-10 03:08:04.913833] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:10.795 [2024-12-10 03:08:04.913869] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:10.795 [2024-12-10 03:08:04.913885] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:10.795 [2024-12-10 03:08:04.913991] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:10.795 [2024-12-10 03:08:04.914002] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:10.795 [2024-12-10 03:08:04.914013] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:10.795 [2024-12-10 03:08:04.914026] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:10.795 [2024-12-10 03:08:04.914035] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:10.795 [2024-12-10 03:08:04.914045] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:10.795 [2024-12-10 03:08:04.914053] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:10.795 [2024-12-10 03:08:04.914061] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:10.795 [2024-12-10 03:08:04.914069] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:10.795 [2024-12-10 03:08:04.914078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.795 [2024-12-10 03:08:04.914086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:10.795 [2024-12-10 03:08:04.914094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:20:10.795 [2024-12-10 03:08:04.914101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.795 [2024-12-10 03:08:04.914189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.795 [2024-12-10 03:08:04.914201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:10.795 [2024-12-10 03:08:04.914209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:10.795 [2024-12-10 03:08:04.914216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.795 [2024-12-10 03:08:04.914321] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:10.795 [2024-12-10 03:08:04.914332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:10.795 [2024-12-10 03:08:04.914340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:10.795 [2024-12-10 03:08:04.914348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.795 [2024-12-10 03:08:04.914356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:10.795 [2024-12-10 03:08:04.914364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:10.795 [2024-12-10 03:08:04.914370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:10.795 [2024-12-10 03:08:04.914393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:10.795 [2024-12-10 03:08:04.914400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:10.795 [2024-12-10 03:08:04.914407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:10.795 [2024-12-10 03:08:04.914414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:10.795 [2024-12-10 03:08:04.914428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:10.795 [2024-12-10 03:08:04.914435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:10.795 [2024-12-10 03:08:04.914446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:10.795 [2024-12-10 03:08:04.914453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:10.795 [2024-12-10 03:08:04.914460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.795 [2024-12-10 03:08:04.914467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:10.795 [2024-12-10 03:08:04.914473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:10.795 [2024-12-10 03:08:04.914479] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.795 [2024-12-10 03:08:04.914486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:10.795 [2024-12-10 03:08:04.914493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:10.795 [2024-12-10 03:08:04.914500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.795 [2024-12-10 03:08:04.914507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:10.795 [2024-12-10 03:08:04.914513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:10.795 [2024-12-10 03:08:04.914520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.795 [2024-12-10 03:08:04.914527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:10.795 [2024-12-10 03:08:04.914534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:10.795 [2024-12-10 03:08:04.914540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.795 [2024-12-10 03:08:04.914547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:10.795 [2024-12-10 03:08:04.914555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:10.795 [2024-12-10 03:08:04.914562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.795 [2024-12-10 03:08:04.914570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:10.795 [2024-12-10 03:08:04.914577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:10.795 [2024-12-10 03:08:04.914583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:10.795 [2024-12-10 03:08:04.914590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:10.795 [2024-12-10 03:08:04.914597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:10.795 [2024-12-10 03:08:04.914604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:10.795 [2024-12-10 03:08:04.914610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:10.795 [2024-12-10 03:08:04.914616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:10.795 [2024-12-10 03:08:04.914624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.795 [2024-12-10 03:08:04.914631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:10.795 [2024-12-10 03:08:04.914638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:10.795 [2024-12-10 03:08:04.914645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.795 [2024-12-10 03:08:04.914651] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:10.795 [2024-12-10 03:08:04.914659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:10.795 [2024-12-10 03:08:04.914671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:10.795 [2024-12-10 03:08:04.914679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.795 [2024-12-10 03:08:04.914687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:10.795 [2024-12-10 03:08:04.914694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:10.795 [2024-12-10 03:08:04.914700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:10.795 
[2024-12-10 03:08:04.914707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:10.795 [2024-12-10 03:08:04.914714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:10.795 [2024-12-10 03:08:04.914721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:10.795 [2024-12-10 03:08:04.914729] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:10.795 [2024-12-10 03:08:04.914738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:10.795 [2024-12-10 03:08:04.914746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:10.795 [2024-12-10 03:08:04.914755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:10.795 [2024-12-10 03:08:04.914762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:10.795 [2024-12-10 03:08:04.914769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:10.795 [2024-12-10 03:08:04.914775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:10.795 [2024-12-10 03:08:04.914783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:10.796 [2024-12-10 03:08:04.914790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:10.796 [2024-12-10 03:08:04.914796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:10.796 [2024-12-10 03:08:04.914803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:10.796 [2024-12-10 03:08:04.914811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:10.796 [2024-12-10 03:08:04.914818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:10.796 [2024-12-10 03:08:04.914825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:10.796 [2024-12-10 03:08:04.914831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:10.796 [2024-12-10 03:08:04.914838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:10.796 [2024-12-10 03:08:04.914845] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:10.796 [2024-12-10 03:08:04.914853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:10.796 [2024-12-10 03:08:04.914862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:10.796 [2024-12-10 03:08:04.914869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:10.796 [2024-12-10 03:08:04.914876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:10.796 [2024-12-10 03:08:04.914884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:10.796 [2024-12-10 03:08:04.914891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:04.914901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:10.796 [2024-12-10 03:08:04.914912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:20:10.796 [2024-12-10 03:08:04.914919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:04.946509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:04.946560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:10.796 [2024-12-10 03:08:04.946570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.532 ms 00:20:10.796 [2024-12-10 03:08:04.946579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:04.946711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:04.946722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:10.796 [2024-12-10 03:08:04.946731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:10.796 [2024-12-10 03:08:04.946740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:04.992083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:04.992141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:10.796 [2024-12-10 03:08:04.992157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.319 ms 00:20:10.796 [2024-12-10 03:08:04.992166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:04.992275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:04.992288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:10.796 [2024-12-10 03:08:04.992298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:10.796 [2024-12-10 03:08:04.992306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:04.992864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:04.992905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:10.796 [2024-12-10 03:08:04.992925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:20:10.796 [2024-12-10 03:08:04.992934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:04.993086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:04.993097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:10.796 [2024-12-10 03:08:04.993106] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:10.796 [2024-12-10 03:08:04.993115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:05.009003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:05.009050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:10.796 [2024-12-10 03:08:05.009060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.865 ms 00:20:10.796 [2024-12-10 03:08:05.009068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:05.023272] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:10.796 [2024-12-10 03:08:05.023321] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:10.796 [2024-12-10 03:08:05.023335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:05.023343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:10.796 [2024-12-10 03:08:05.023353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.156 ms 00:20:10.796 [2024-12-10 03:08:05.023360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:05.049518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:05.049581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:10.796 [2024-12-10 03:08:05.049593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.054 ms 00:20:10.796 [2024-12-10 03:08:05.049601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:05.062900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:05.062945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:10.796 [2024-12-10 03:08:05.062957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.207 ms 00:20:10.796 [2024-12-10 03:08:05.062964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:05.075634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:05.075683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:10.796 [2024-12-10 03:08:05.075695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.585 ms 00:20:10.796 [2024-12-10 03:08:05.075701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:05.076370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:05.076418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:10.796 [2024-12-10 03:08:05.076429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:20:10.796 [2024-12-10 03:08:05.076437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:05.142091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:05.142158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:10.796 [2024-12-10 03:08:05.142173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.626 ms 00:20:10.796 [2024-12-10 03:08:05.142182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:05.153283] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:10.796 [2024-12-10 03:08:05.172216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:05.172271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:10.796 [2024-12-10 03:08:05.172283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.927 ms 00:20:10.796 [2024-12-10 03:08:05.172297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:05.172418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:05.172431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:10.796 [2024-12-10 03:08:05.172443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:10.796 [2024-12-10 03:08:05.172452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:05.172508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:05.172519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:10.796 [2024-12-10 03:08:05.172528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:10.796 [2024-12-10 03:08:05.172540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:05.172570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:05.172579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:10.796 [2024-12-10 03:08:05.172588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:10.796 [2024-12-10 03:08:05.172596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.796 [2024-12-10 03:08:05.172634] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:10.796 [2024-12-10 03:08:05.172645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.796 [2024-12-10 03:08:05.172654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:10.796 [2024-12-10 03:08:05.172662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:10.796 [2024-12-10 03:08:05.172670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.057 [2024-12-10 03:08:05.199043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.057 [2024-12-10 03:08:05.199095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:11.057 [2024-12-10 03:08:05.199108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.353 ms 00:20:11.057 [2024-12-10 03:08:05.199116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.057 [2024-12-10 03:08:05.199264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.057 [2024-12-10 03:08:05.199278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:11.057 [2024-12-10 03:08:05.199288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:11.057 [2024-12-10 03:08:05.199297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
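
Each management step above appears as a quadruplet of trace_step records (Action, name, duration, status), and finish_msg then reports a total for the whole pipeline; for the 'FTL startup' total of 319.049 ms logged just below, the per-step durations add up to roughly 313 ms, the gap being time spent between steps that is not attributed to any of them. A throwaway cross-check sketch, assuming the console output has been saved one record per line to a file whose name (ftl0.log) is illustrative:

  # Sum every per-step duration reported by trace_step and print the total in ms.
  grep 'trace_step: .*duration:' ftl0.log | \
      awk '{ sum += $(NF-1) } END { printf "%.3f ms\n", sum }'
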
00:20:11.057 [2024-12-10 03:08:05.200411] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:11.057 [2024-12-10 03:08:05.203793] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 319.049 ms, result 0 00:20:11.057 [2024-12-10 03:08:05.205209] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:11.057 [2024-12-10 03:08:05.218700] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:11.319  [2024-12-10T03:08:05.707Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-12-10 03:08:05.609173] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:11.319 [2024-12-10 03:08:05.618323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.319 [2024-12-10 03:08:05.618372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:11.319 [2024-12-10 03:08:05.618413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:11.319 [2024-12-10 03:08:05.618422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.319 [2024-12-10 03:08:05.618445] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:11.319 [2024-12-10 03:08:05.621401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.319 [2024-12-10 03:08:05.621442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:11.319 [2024-12-10 03:08:05.621454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.942 ms 00:20:11.319 [2024-12-10 03:08:05.621463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.319 [2024-12-10 03:08:05.624710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.319 [2024-12-10 03:08:05.624756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:11.319 [2024-12-10 03:08:05.624767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.219 ms 00:20:11.319 [2024-12-10 03:08:05.624775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.319 [2024-12-10 03:08:05.629299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.319 [2024-12-10 03:08:05.629342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:11.319 [2024-12-10 03:08:05.629353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.503 ms 00:20:11.319 [2024-12-10 03:08:05.629361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.319 [2024-12-10 03:08:05.636269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.319 [2024-12-10 03:08:05.636312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:11.319 [2024-12-10 03:08:05.636323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.866 ms 00:20:11.319 [2024-12-10 03:08:05.636332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.319 [2024-12-10 03:08:05.661322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.319 [2024-12-10 03:08:05.661385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:11.319 [2024-12-10 03:08:05.661398] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 24.928 ms 00:20:11.319 [2024-12-10 03:08:05.661405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.319 [2024-12-10 03:08:05.677863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.319 [2024-12-10 03:08:05.677915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:11.319 [2024-12-10 03:08:05.677928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.410 ms 00:20:11.319 [2024-12-10 03:08:05.677936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.319 [2024-12-10 03:08:05.678090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.319 [2024-12-10 03:08:05.678102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:11.319 [2024-12-10 03:08:05.678120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:11.319 [2024-12-10 03:08:05.678127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.582 [2024-12-10 03:08:05.703556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.582 [2024-12-10 03:08:05.703604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:11.582 [2024-12-10 03:08:05.703615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.412 ms 00:20:11.582 [2024-12-10 03:08:05.703622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.582 [2024-12-10 03:08:05.728930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.582 [2024-12-10 03:08:05.728979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:11.582 [2024-12-10 03:08:05.728989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.232 ms 00:20:11.582 [2024-12-10 03:08:05.728996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.582 [2024-12-10 03:08:05.753542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.582 [2024-12-10 03:08:05.753589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:11.582 [2024-12-10 03:08:05.753601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.501 ms 00:20:11.582 [2024-12-10 03:08:05.753608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.582 [2024-12-10 03:08:05.778081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.582 [2024-12-10 03:08:05.778128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:11.582 [2024-12-10 03:08:05.778138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.398 ms 00:20:11.582 [2024-12-10 03:08:05.778145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.582 [2024-12-10 03:08:05.778189] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:11.582 [2024-12-10 03:08:05.778204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:11.582 [2024-12-10 03:08:05.778238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:11.582 [2024-12-10 03:08:05.778431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778844] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.778996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.779004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.779012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.779021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:11.583 [2024-12-10 03:08:05.779037] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:11.583 [2024-12-10 03:08:05.779045] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9db8c7-13a0-427a-b0c5-b6bfa89fe0fd 00:20:11.583 [2024-12-10 03:08:05.779055] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:11.583 [2024-12-10 03:08:05.779064] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:11.583 [2024-12-10 03:08:05.779071] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:11.583 [2024-12-10 03:08:05.779080] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:11.583 [2024-12-10 03:08:05.779088] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:11.583 [2024-12-10 03:08:05.779096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:11.583 [2024-12-10 03:08:05.779107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:11.583 [2024-12-10 03:08:05.779113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:11.583 [2024-12-10 03:08:05.779119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:11.583 [2024-12-10 03:08:05.779127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.583 [2024-12-10 03:08:05.779135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:11.583 [2024-12-10 03:08:05.779143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 00:20:11.583 [2024-12-10 03:08:05.779152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.583 [2024-12-10 03:08:05.792212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.583 [2024-12-10 03:08:05.792256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:11.583 [2024-12-10 03:08:05.792266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.028 ms 00:20:11.583 [2024-12-10 03:08:05.792274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.583 [2024-12-10 03:08:05.792694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.583 [2024-12-10 03:08:05.792713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:11.583 [2024-12-10 03:08:05.792723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:20:11.583 [2024-12-10 03:08:05.792731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.583 [2024-12-10 03:08:05.831295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.583 [2024-12-10 03:08:05.831348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:11.584 [2024-12-10 03:08:05.831360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.584 [2024-12-10 03:08:05.831391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.584 [2024-12-10 03:08:05.831490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.584 [2024-12-10 03:08:05.831501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:11.584 [2024-12-10 03:08:05.831510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.584 [2024-12-10 03:08:05.831518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.584 [2024-12-10 03:08:05.831571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.584 [2024-12-10 03:08:05.831581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:11.584 [2024-12-10 03:08:05.831590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.584 [2024-12-10 03:08:05.831597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.584 [2024-12-10 03:08:05.831620] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.584 [2024-12-10 03:08:05.831628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:11.584 [2024-12-10 03:08:05.831636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.584 [2024-12-10 03:08:05.831643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.584 [2024-12-10 03:08:05.915072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.584 [2024-12-10 03:08:05.915134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:11.584 [2024-12-10 03:08:05.915147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.584 [2024-12-10 03:08:05.915162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.845 [2024-12-10 03:08:05.983810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.845 [2024-12-10 03:08:05.983870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:11.845 [2024-12-10 03:08:05.983884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.845 [2024-12-10 03:08:05.983893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.845 [2024-12-10 03:08:05.983988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.845 [2024-12-10 03:08:05.983999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.845 [2024-12-10 03:08:05.984008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.845 [2024-12-10 03:08:05.984017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.845 [2024-12-10 03:08:05.984052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.845 [2024-12-10 03:08:05.984068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.845 [2024-12-10 03:08:05.984077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.845 [2024-12-10 03:08:05.984085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.845 [2024-12-10 03:08:05.984185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.845 [2024-12-10 03:08:05.984196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.845 [2024-12-10 03:08:05.984205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.845 [2024-12-10 03:08:05.984213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.845 [2024-12-10 03:08:05.984248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.845 [2024-12-10 03:08:05.984258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:11.845 [2024-12-10 03:08:05.984270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.845 [2024-12-10 03:08:05.984278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.845 [2024-12-10 03:08:05.984322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.845 [2024-12-10 03:08:05.984332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.845 [2024-12-10 03:08:05.984340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.845 [2024-12-10 03:08:05.984349] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:11.845 [2024-12-10 03:08:05.984425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.845 [2024-12-10 03:08:05.984440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.845 [2024-12-10 03:08:05.984450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.845 [2024-12-10 03:08:05.984458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.845 [2024-12-10 03:08:05.984611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.273 ms, result 0 00:20:12.417 00:20:12.417 00:20:12.417 03:08:06 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76981 00:20:12.417 03:08:06 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76981 00:20:12.417 03:08:06 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:12.417 03:08:06 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76981 ']' 00:20:12.417 03:08:06 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.417 03:08:06 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.417 03:08:06 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.417 03:08:06 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.417 03:08:06 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:12.679 [2024-12-10 03:08:06.841153] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:20:12.679 [2024-12-10 03:08:06.841303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76981 ] 00:20:12.679 [2024-12-10 03:08:06.996094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.939 [2024-12-10 03:08:07.111961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.507 03:08:07 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.507 03:08:07 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:13.507 03:08:07 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:13.765 [2024-12-10 03:08:07.943870] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.765 [2024-12-10 03:08:07.944050] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.765 [2024-12-10 03:08:08.117504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.765 [2024-12-10 03:08:08.117545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:13.765 [2024-12-10 03:08:08.117560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:13.765 [2024-12-10 03:08:08.117568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.765 [2024-12-10 03:08:08.120216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.765 [2024-12-10 03:08:08.120250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.766 [2024-12-10 03:08:08.120261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.629 ms 00:20:13.766 [2024-12-10 03:08:08.120269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.766 [2024-12-10 03:08:08.120341] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:13.766 [2024-12-10 03:08:08.121015] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:13.766 [2024-12-10 03:08:08.121041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.766 [2024-12-10 03:08:08.121049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.766 [2024-12-10 03:08:08.121059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:20:13.766 [2024-12-10 03:08:08.121066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.766 [2024-12-10 03:08:08.122171] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:13.766 [2024-12-10 03:08:08.134789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.766 [2024-12-10 03:08:08.134824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:13.766 [2024-12-10 03:08:08.134836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.622 ms 00:20:13.766 [2024-12-10 03:08:08.134846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.766 [2024-12-10 03:08:08.134924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.766 [2024-12-10 03:08:08.134936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:13.766 [2024-12-10 03:08:08.134945] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:13.766 [2024-12-10 03:08:08.134953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.766 [2024-12-10 03:08:08.139678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.766 [2024-12-10 03:08:08.139713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:13.766 [2024-12-10 03:08:08.139722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.679 ms 00:20:13.766 [2024-12-10 03:08:08.139731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.766 [2024-12-10 03:08:08.139819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.766 [2024-12-10 03:08:08.139831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.766 [2024-12-10 03:08:08.139839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:13.766 [2024-12-10 03:08:08.139852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.766 [2024-12-10 03:08:08.139874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.766 [2024-12-10 03:08:08.139883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:13.766 [2024-12-10 03:08:08.139891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:13.766 [2024-12-10 03:08:08.139899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.766 [2024-12-10 03:08:08.139929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:13.766 [2024-12-10 03:08:08.143271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.766 [2024-12-10 03:08:08.143409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.766 [2024-12-10 03:08:08.143428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.344 ms 00:20:13.766 [2024-12-10 03:08:08.143435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.766 [2024-12-10 03:08:08.143475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.766 [2024-12-10 03:08:08.143484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:13.766 [2024-12-10 03:08:08.143493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:13.766 [2024-12-10 03:08:08.143502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.766 [2024-12-10 03:08:08.143523] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:13.766 [2024-12-10 03:08:08.143542] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:13.766 [2024-12-10 03:08:08.143582] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:13.766 [2024-12-10 03:08:08.143596] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:13.766 [2024-12-10 03:08:08.143700] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:13.766 [2024-12-10 03:08:08.143711] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:13.766 [2024-12-10 03:08:08.143724] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:13.766 [2024-12-10 03:08:08.143734] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:13.766 [2024-12-10 03:08:08.143745] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:13.766 [2024-12-10 03:08:08.143752] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:13.766 [2024-12-10 03:08:08.143761] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:13.766 [2024-12-10 03:08:08.143768] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:13.766 [2024-12-10 03:08:08.143778] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:13.766 [2024-12-10 03:08:08.143786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.766 [2024-12-10 03:08:08.143794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:13.766 [2024-12-10 03:08:08.143801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:13.766 [2024-12-10 03:08:08.143809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.766 [2024-12-10 03:08:08.143926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.766 [2024-12-10 03:08:08.143937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:13.766 [2024-12-10 03:08:08.143945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:13.766 [2024-12-10 03:08:08.143953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.766 [2024-12-10 03:08:08.144053] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:13.766 [2024-12-10 03:08:08.144064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:13.766 [2024-12-10 03:08:08.144072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:13.766 [2024-12-10 03:08:08.144081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.766 [2024-12-10 03:08:08.144088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:13.766 [2024-12-10 03:08:08.144098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:13.766 [2024-12-10 03:08:08.144105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:13.766 [2024-12-10 03:08:08.144115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:13.766 [2024-12-10 03:08:08.144123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:13.766 [2024-12-10 03:08:08.144131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:13.766 [2024-12-10 03:08:08.144137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:13.766 [2024-12-10 03:08:08.144145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:13.766 [2024-12-10 03:08:08.144152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:13.766 [2024-12-10 03:08:08.144160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:13.766 [2024-12-10 03:08:08.144167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:13.766 [2024-12-10 03:08:08.144175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.766 
[2024-12-10 03:08:08.144182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:13.766 [2024-12-10 03:08:08.144191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:13.766 [2024-12-10 03:08:08.144202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.766 [2024-12-10 03:08:08.144210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:13.766 [2024-12-10 03:08:08.144217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:13.766 [2024-12-10 03:08:08.144225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.766 [2024-12-10 03:08:08.144232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:13.766 [2024-12-10 03:08:08.144241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:13.766 [2024-12-10 03:08:08.144248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.766 [2024-12-10 03:08:08.144256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:13.766 [2024-12-10 03:08:08.144262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:13.766 [2024-12-10 03:08:08.144270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.766 [2024-12-10 03:08:08.144277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:13.766 [2024-12-10 03:08:08.144286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:13.766 [2024-12-10 03:08:08.144292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.766 [2024-12-10 03:08:08.144301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:13.766 [2024-12-10 03:08:08.144307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:13.766 [2024-12-10 03:08:08.144315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:13.766 [2024-12-10 03:08:08.144322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:13.766 [2024-12-10 03:08:08.144329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:13.766 [2024-12-10 03:08:08.144336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:13.766 [2024-12-10 03:08:08.144343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:13.766 [2024-12-10 03:08:08.144350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:13.766 [2024-12-10 03:08:08.144360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.766 [2024-12-10 03:08:08.144366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:13.766 [2024-12-10 03:08:08.144385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:13.766 [2024-12-10 03:08:08.144393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.766 [2024-12-10 03:08:08.144402] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:13.766 [2024-12-10 03:08:08.144411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:13.766 [2024-12-10 03:08:08.144420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:13.766 [2024-12-10 03:08:08.144427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.766 [2024-12-10 03:08:08.144435] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:13.767 [2024-12-10 03:08:08.144444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:13.767 [2024-12-10 03:08:08.144452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:13.767 [2024-12-10 03:08:08.144459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:13.767 [2024-12-10 03:08:08.144467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:13.767 [2024-12-10 03:08:08.144474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:13.767 [2024-12-10 03:08:08.144484] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:13.767 [2024-12-10 03:08:08.144493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:13.767 [2024-12-10 03:08:08.144505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:13.767 [2024-12-10 03:08:08.144513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:13.767 [2024-12-10 03:08:08.144521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:13.767 [2024-12-10 03:08:08.144528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:13.767 [2024-12-10 03:08:08.144537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:13.767 [2024-12-10 03:08:08.144544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:13.767 [2024-12-10 03:08:08.144553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:13.767 [2024-12-10 03:08:08.144559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:13.767 [2024-12-10 03:08:08.144568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:13.767 [2024-12-10 03:08:08.144575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:13.767 [2024-12-10 03:08:08.144584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:13.767 [2024-12-10 03:08:08.144591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:13.767 [2024-12-10 03:08:08.144599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:13.767 [2024-12-10 03:08:08.144606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:13.767 [2024-12-10 03:08:08.144615] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:13.767 [2024-12-10 
03:08:08.144623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:13.767 [2024-12-10 03:08:08.144634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:13.767 [2024-12-10 03:08:08.144641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:13.767 [2024-12-10 03:08:08.144650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:13.767 [2024-12-10 03:08:08.144657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:13.767 [2024-12-10 03:08:08.144666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.767 [2024-12-10 03:08:08.144673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:13.767 [2024-12-10 03:08:08.144681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:20:13.767 [2024-12-10 03:08:08.144690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.170149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.170184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.026 [2024-12-10 03:08:08.170197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.402 ms 00:20:14.026 [2024-12-10 03:08:08.170206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.170323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.170333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:14.026 [2024-12-10 03:08:08.170343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:14.026 [2024-12-10 03:08:08.170349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.200446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.200478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.026 [2024-12-10 03:08:08.200490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.074 ms 00:20:14.026 [2024-12-10 03:08:08.200497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.200550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.200559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.026 [2024-12-10 03:08:08.200569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:14.026 [2024-12-10 03:08:08.200576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.200881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.200894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.026 [2024-12-10 03:08:08.200906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:20:14.026 [2024-12-10 03:08:08.200913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.201035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.201044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.026 [2024-12-10 03:08:08.201053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:14.026 [2024-12-10 03:08:08.201061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.215171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.215200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.026 [2024-12-10 03:08:08.215211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.087 ms 00:20:14.026 [2024-12-10 03:08:08.215218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.242636] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:14.026 [2024-12-10 03:08:08.242674] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:14.026 [2024-12-10 03:08:08.242689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.242698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:14.026 [2024-12-10 03:08:08.242709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.363 ms 00:20:14.026 [2024-12-10 03:08:08.242721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.266815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.266859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:14.026 [2024-12-10 03:08:08.266873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.022 ms 00:20:14.026 [2024-12-10 03:08:08.266880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.278587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.278617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:14.026 [2024-12-10 03:08:08.278630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.637 ms 00:20:14.026 [2024-12-10 03:08:08.278638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.290249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.290277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:14.026 [2024-12-10 03:08:08.290290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.548 ms 00:20:14.026 [2024-12-10 03:08:08.290297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.290911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.290936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:14.026 [2024-12-10 03:08:08.290947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:20:14.026 [2024-12-10 03:08:08.290955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 
03:08:08.346480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.346519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:14.026 [2024-12-10 03:08:08.346532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.501 ms 00:20:14.026 [2024-12-10 03:08:08.346540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.356735] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:14.026 [2024-12-10 03:08:08.370547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.370588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:14.026 [2024-12-10 03:08:08.370601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.926 ms 00:20:14.026 [2024-12-10 03:08:08.370611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.370683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.026 [2024-12-10 03:08:08.370695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:14.026 [2024-12-10 03:08:08.370703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:14.026 [2024-12-10 03:08:08.370712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.026 [2024-12-10 03:08:08.370758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.027 [2024-12-10 03:08:08.370768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:14.027 [2024-12-10 03:08:08.370776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:14.027 [2024-12-10 03:08:08.370786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.027 [2024-12-10 03:08:08.370810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.027 [2024-12-10 03:08:08.370819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:14.027 [2024-12-10 03:08:08.370827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:14.027 [2024-12-10 03:08:08.370838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.027 [2024-12-10 03:08:08.370868] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:14.027 [2024-12-10 03:08:08.370880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.027 [2024-12-10 03:08:08.370890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:14.027 [2024-12-10 03:08:08.370899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:14.027 [2024-12-10 03:08:08.370907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.027 [2024-12-10 03:08:08.394442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.027 [2024-12-10 03:08:08.394563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.027 [2024-12-10 03:08:08.394583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.511 ms 00:20:14.027 [2024-12-10 03:08:08.394591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.027 [2024-12-10 03:08:08.394672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.027 [2024-12-10 03:08:08.394683] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.027 [2024-12-10 03:08:08.394694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:14.027 [2024-12-10 03:08:08.394704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.027 [2024-12-10 03:08:08.395452] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.027 [2024-12-10 03:08:08.398368] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 277.663 ms, result 0 00:20:14.027 [2024-12-10 03:08:08.400096] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.287 Some configs were skipped because the RPC state that can call them passed over. 00:20:14.287 03:08:08 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:14.287 [2024-12-10 03:08:08.631683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.287 [2024-12-10 03:08:08.631831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:14.287 [2024-12-10 03:08:08.631889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:20:14.287 [2024-12-10 03:08:08.631926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.287 [2024-12-10 03:08:08.631976] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.170 ms, result 0 00:20:14.287 true 00:20:14.287 03:08:08 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:14.549 [2024-12-10 03:08:08.843852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.549 [2024-12-10 03:08:08.844029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:14.549 [2024-12-10 03:08:08.844094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.757 ms 00:20:14.549 [2024-12-10 03:08:08.844118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.549 [2024-12-10 03:08:08.844179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.088 ms, result 0 00:20:14.549 true 00:20:14.549 03:08:08 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76981 00:20:14.549 03:08:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76981 ']' 00:20:14.549 03:08:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76981 00:20:14.549 03:08:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:14.549 03:08:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.549 03:08:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76981 00:20:14.549 killing process with pid 76981 00:20:14.549 03:08:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:14.549 03:08:08 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:14.549 03:08:08 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76981' 00:20:14.549 03:08:08 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76981 00:20:14.549 03:08:08 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76981 00:20:15.489 [2024-12-10 03:08:09.552670] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.489 [2024-12-10 03:08:09.552717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:15.489 [2024-12-10 03:08:09.552727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:15.489 [2024-12-10 03:08:09.552735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.489 [2024-12-10 03:08:09.552754] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:15.489 [2024-12-10 03:08:09.554852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.489 [2024-12-10 03:08:09.554875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:15.489 [2024-12-10 03:08:09.554886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:20:15.489 [2024-12-10 03:08:09.554892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.489 [2024-12-10 03:08:09.555116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.489 [2024-12-10 03:08:09.555123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:15.489 [2024-12-10 03:08:09.555131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:20:15.489 [2024-12-10 03:08:09.555137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.489 [2024-12-10 03:08:09.558296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.489 [2024-12-10 03:08:09.558323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:15.489 [2024-12-10 03:08:09.558333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.142 ms 00:20:15.489 [2024-12-10 03:08:09.558339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.489 [2024-12-10 03:08:09.563597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.489 [2024-12-10 03:08:09.563706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:15.489 [2024-12-10 03:08:09.563724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.227 ms 00:20:15.489 [2024-12-10 03:08:09.563731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.489 [2024-12-10 03:08:09.570845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.489 [2024-12-10 03:08:09.570947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:15.489 [2024-12-10 03:08:09.570963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.071 ms 00:20:15.489 [2024-12-10 03:08:09.570969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.489 [2024-12-10 03:08:09.577436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.489 [2024-12-10 03:08:09.577526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:15.489 [2024-12-10 03:08:09.577575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.437 ms 00:20:15.489 [2024-12-10 03:08:09.577594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.489 [2024-12-10 03:08:09.577706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.489 [2024-12-10 03:08:09.577727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:15.489 [2024-12-10 03:08:09.577744] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:15.489 [2024-12-10 03:08:09.577788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.489 [2024-12-10 03:08:09.585344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.489 [2024-12-10 03:08:09.585451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.489 [2024-12-10 03:08:09.585502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.527 ms 00:20:15.489 [2024-12-10 03:08:09.585519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.490 [2024-12-10 03:08:09.593020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.490 [2024-12-10 03:08:09.593108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.490 [2024-12-10 03:08:09.593155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.454 ms 00:20:15.490 [2024-12-10 03:08:09.593172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.490 [2024-12-10 03:08:09.600204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.490 [2024-12-10 03:08:09.600284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.490 [2024-12-10 03:08:09.600325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.996 ms 00:20:15.490 [2024-12-10 03:08:09.600341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.490 [2024-12-10 03:08:09.607483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.490 [2024-12-10 03:08:09.607564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.490 [2024-12-10 03:08:09.607605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.079 ms 00:20:15.490 [2024-12-10 03:08:09.607621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.490 [2024-12-10 03:08:09.607661] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.490 [2024-12-10 03:08:09.607684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.607709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.607731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.607755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.607809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.607892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.607927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.607950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.607972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.607995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608017] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 
[2024-12-10 03:08:09.608839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.608989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:15.490 [2024-12-10 03:08:09.609622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.609989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.610012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.610059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.610084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.610106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.610146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.610169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.610215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.490 [2024-12-10 03:08:09.610238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.491 [2024-12-10 03:08:09.610979] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.491 [2024-12-10 03:08:09.610999] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9db8c7-13a0-427a-b0c5-b6bfa89fe0fd 00:20:15.491 [2024-12-10 03:08:09.611057] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.491 [2024-12-10 03:08:09.611075] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:15.491 [2024-12-10 03:08:09.611090] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.491 [2024-12-10 03:08:09.611106] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.491 [2024-12-10 03:08:09.611120] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.491 [2024-12-10 03:08:09.611136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:15.491 [2024-12-10 03:08:09.611174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.491 [2024-12-10 03:08:09.611192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.491 [2024-12-10 03:08:09.611206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.491 [2024-12-10 03:08:09.611222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:15.491 [2024-12-10 03:08:09.611236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.491 [2024-12-10 03:08:09.611278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.562 ms 00:20:15.491 [2024-12-10 03:08:09.611295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.620728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.491 [2024-12-10 03:08:09.620807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.491 [2024-12-10 03:08:09.620849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.402 ms 00:20:15.491 [2024-12-10 03:08:09.620866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.621156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.491 [2024-12-10 03:08:09.621212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.491 [2024-12-10 03:08:09.621254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:20:15.491 [2024-12-10 03:08:09.621271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.656157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.491 [2024-12-10 03:08:09.656246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.491 [2024-12-10 03:08:09.656286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.491 [2024-12-10 03:08:09.656304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.656403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.491 [2024-12-10 03:08:09.656424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.491 [2024-12-10 03:08:09.656443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.491 [2024-12-10 03:08:09.656457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.656502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.491 [2024-12-10 03:08:09.656520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.491 [2024-12-10 03:08:09.656530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.491 [2024-12-10 03:08:09.656535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.656550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.491 [2024-12-10 03:08:09.656556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.491 [2024-12-10 03:08:09.656563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.491 [2024-12-10 03:08:09.656570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.714744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.491 [2024-12-10 03:08:09.714774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.491 [2024-12-10 03:08:09.714784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.491 [2024-12-10 03:08:09.714790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 
03:08:09.762562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.491 [2024-12-10 03:08:09.762593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.491 [2024-12-10 03:08:09.762603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.491 [2024-12-10 03:08:09.762611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.763558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.491 [2024-12-10 03:08:09.763581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.491 [2024-12-10 03:08:09.763591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.491 [2024-12-10 03:08:09.763597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.763625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.491 [2024-12-10 03:08:09.763632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.491 [2024-12-10 03:08:09.763639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.491 [2024-12-10 03:08:09.763646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.763721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.491 [2024-12-10 03:08:09.763729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.491 [2024-12-10 03:08:09.763737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.491 [2024-12-10 03:08:09.763743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.763768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.491 [2024-12-10 03:08:09.763775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:15.491 [2024-12-10 03:08:09.763783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.491 [2024-12-10 03:08:09.763788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.763818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.491 [2024-12-10 03:08:09.763825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.491 [2024-12-10 03:08:09.763834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.491 [2024-12-10 03:08:09.763840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.763873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.491 [2024-12-10 03:08:09.763880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.491 [2024-12-10 03:08:09.763888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.491 [2024-12-10 03:08:09.763894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.491 [2024-12-10 03:08:09.764011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 211.323 ms, result 0 00:20:16.065 03:08:10 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:16.065 [2024-12-10 03:08:10.344788] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:16.065 [2024-12-10 03:08:10.344908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77028 ] 00:20:16.326 [2024-12-10 03:08:10.501206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.326 [2024-12-10 03:08:10.576154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.588 [2024-12-10 03:08:10.785051] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.588 [2024-12-10 03:08:10.785102] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.588 [2024-12-10 03:08:10.932484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.588 [2024-12-10 03:08:10.932517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:16.588 [2024-12-10 03:08:10.932528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:16.588 [2024-12-10 03:08:10.932534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.588 [2024-12-10 03:08:10.934598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.588 [2024-12-10 03:08:10.934713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.588 [2024-12-10 03:08:10.934726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.051 ms 00:20:16.588 [2024-12-10 03:08:10.934732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.588 [2024-12-10 03:08:10.934786] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:16.588 [2024-12-10 03:08:10.935337] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:16.588 [2024-12-10 03:08:10.935353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.588 [2024-12-10 03:08:10.935360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.588 [2024-12-10 03:08:10.935366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:20:16.588 [2024-12-10 03:08:10.935371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.588 [2024-12-10 03:08:10.936339] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:16.588 [2024-12-10 03:08:10.945911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.588 [2024-12-10 03:08:10.946018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:16.588 [2024-12-10 03:08:10.946031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.574 ms 00:20:16.588 [2024-12-10 03:08:10.946037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.588 [2024-12-10 03:08:10.946098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.588 [2024-12-10 03:08:10.946106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:16.588 [2024-12-10 03:08:10.946113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:16.588 [2024-12-10 
03:08:10.946118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.588 [2024-12-10 03:08:10.950457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.588 [2024-12-10 03:08:10.950480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.588 [2024-12-10 03:08:10.950487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.310 ms 00:20:16.588 [2024-12-10 03:08:10.950493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.588 [2024-12-10 03:08:10.950565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.588 [2024-12-10 03:08:10.950573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.588 [2024-12-10 03:08:10.950579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:16.588 [2024-12-10 03:08:10.950587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.588 [2024-12-10 03:08:10.950602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.588 [2024-12-10 03:08:10.950608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:16.588 [2024-12-10 03:08:10.950614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:16.588 [2024-12-10 03:08:10.950619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.588 [2024-12-10 03:08:10.950636] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:16.588 [2024-12-10 03:08:10.953232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.588 [2024-12-10 03:08:10.953332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.588 [2024-12-10 03:08:10.953344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:20:16.588 [2024-12-10 03:08:10.953350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.588 [2024-12-10 03:08:10.953394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.588 [2024-12-10 03:08:10.953402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:16.588 [2024-12-10 03:08:10.953408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:16.588 [2024-12-10 03:08:10.953416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.588 [2024-12-10 03:08:10.953429] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:16.588 [2024-12-10 03:08:10.953444] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:16.588 [2024-12-10 03:08:10.953470] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:16.588 [2024-12-10 03:08:10.953481] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:16.588 [2024-12-10 03:08:10.953558] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:16.588 [2024-12-10 03:08:10.953566] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:16.588 [2024-12-10 03:08:10.953576] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
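A minimal sketch of the command sequence driving this portion of the test, assuming the same repository layout as the logged paths (shown here relative to the spdk checkout for readability) and taking the bdev name, LBA ranges, and block counts directly from the invocations quoted verbatim above:

  # Trim two 1024-block ranges on the FTL bdev over JSON-RPC; both calls
  # appear earlier in this log and each returned 'true'.
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

  # Read 65536 blocks from ftl0 into a flat file with spdk_dd; --json points
  # at the FTL bdev configuration so the standalone tool can bring the device
  # up itself, which is what produces the 'FTL startup' management trace
  # surrounding this point in the log.
  build/bin/spdk_dd --ib=ftl0 --of=test/ftl/data --count=65536 \
      --json=test/ftl/config/ftl.json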
00:20:16.588 [2024-12-10 03:08:10.953584] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:16.588 [2024-12-10 03:08:10.953590] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:16.588 [2024-12-10 03:08:10.953596] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:16.588 [2024-12-10 03:08:10.953602] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:16.588 [2024-12-10 03:08:10.953608] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:16.588 [2024-12-10 03:08:10.953613] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:16.588 [2024-12-10 03:08:10.953619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.588 [2024-12-10 03:08:10.953625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:16.588 [2024-12-10 03:08:10.953631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:20:16.588 [2024-12-10 03:08:10.953636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.588 [2024-12-10 03:08:10.953704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.588 [2024-12-10 03:08:10.953710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:16.588 [2024-12-10 03:08:10.953716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:16.588 [2024-12-10 03:08:10.953721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.588 [2024-12-10 03:08:10.953795] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:16.588 [2024-12-10 03:08:10.953802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:16.589 [2024-12-10 03:08:10.953808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.589 [2024-12-10 03:08:10.953814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.589 [2024-12-10 03:08:10.953820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:16.589 [2024-12-10 03:08:10.953825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:16.589 [2024-12-10 03:08:10.953830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:16.589 [2024-12-10 03:08:10.953835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:16.589 [2024-12-10 03:08:10.953841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:16.589 [2024-12-10 03:08:10.953846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.589 [2024-12-10 03:08:10.953852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:16.589 [2024-12-10 03:08:10.953862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:16.589 [2024-12-10 03:08:10.953867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.589 [2024-12-10 03:08:10.953872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:16.589 [2024-12-10 03:08:10.953877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:16.589 [2024-12-10 03:08:10.953882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.589 [2024-12-10 03:08:10.953887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:16.589 [2024-12-10 03:08:10.953892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:16.589 [2024-12-10 03:08:10.953896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.589 [2024-12-10 03:08:10.953901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:16.589 [2024-12-10 03:08:10.953906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:16.589 [2024-12-10 03:08:10.953911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.589 [2024-12-10 03:08:10.953916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:16.589 [2024-12-10 03:08:10.953921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:16.589 [2024-12-10 03:08:10.953926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.589 [2024-12-10 03:08:10.953930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:16.589 [2024-12-10 03:08:10.953935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:16.589 [2024-12-10 03:08:10.953940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.589 [2024-12-10 03:08:10.953944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:16.589 [2024-12-10 03:08:10.953949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:16.589 [2024-12-10 03:08:10.953954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.589 [2024-12-10 03:08:10.953959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:16.589 [2024-12-10 03:08:10.953963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:16.589 [2024-12-10 03:08:10.953968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.589 [2024-12-10 03:08:10.953973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:16.589 [2024-12-10 03:08:10.953977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:16.589 [2024-12-10 03:08:10.953982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.589 [2024-12-10 03:08:10.953987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:16.589 [2024-12-10 03:08:10.953992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:16.589 [2024-12-10 03:08:10.953997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.589 [2024-12-10 03:08:10.954002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:16.589 [2024-12-10 03:08:10.954007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:16.589 [2024-12-10 03:08:10.954012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.589 [2024-12-10 03:08:10.954018] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:16.589 [2024-12-10 03:08:10.954026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:16.589 [2024-12-10 03:08:10.954031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.589 [2024-12-10 03:08:10.954037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.589 [2024-12-10 03:08:10.954043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:16.589 [2024-12-10 03:08:10.954048] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:16.589 [2024-12-10 03:08:10.954053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:16.589 [2024-12-10 03:08:10.954058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:16.589 [2024-12-10 03:08:10.954063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:16.589 [2024-12-10 03:08:10.954068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:16.589 [2024-12-10 03:08:10.954075] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:16.589 [2024-12-10 03:08:10.954082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.589 [2024-12-10 03:08:10.954088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:16.589 [2024-12-10 03:08:10.954093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:16.589 [2024-12-10 03:08:10.954099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:16.589 [2024-12-10 03:08:10.954104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:16.589 [2024-12-10 03:08:10.954109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:16.589 [2024-12-10 03:08:10.954115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:16.589 [2024-12-10 03:08:10.954120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:16.589 [2024-12-10 03:08:10.954125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:16.589 [2024-12-10 03:08:10.954131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:16.589 [2024-12-10 03:08:10.954136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:16.589 [2024-12-10 03:08:10.954141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:16.589 [2024-12-10 03:08:10.954146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:16.589 [2024-12-10 03:08:10.954151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:16.589 [2024-12-10 03:08:10.954157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:16.589 [2024-12-10 03:08:10.954162] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:16.589 [2024-12-10 03:08:10.954168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.589 [2024-12-10 03:08:10.954177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:16.589 [2024-12-10 03:08:10.954183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:16.589 [2024-12-10 03:08:10.954188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:16.589 [2024-12-10 03:08:10.954194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:16.589 [2024-12-10 03:08:10.954200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.589 [2024-12-10 03:08:10.954206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:16.589 [2024-12-10 03:08:10.954212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:20:16.589 [2024-12-10 03:08:10.954219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:10.974787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:10.974814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.851 [2024-12-10 03:08:10.974822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.530 ms 00:20:16.851 [2024-12-10 03:08:10.974830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:10.974924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:10.974932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:16.851 [2024-12-10 03:08:10.974939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:16.851 [2024-12-10 03:08:10.974945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.016413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.016444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.851 [2024-12-10 03:08:11.016453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.451 ms 00:20:16.851 [2024-12-10 03:08:11.016459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.016516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.016525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.851 [2024-12-10 03:08:11.016532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:16.851 [2024-12-10 03:08:11.016538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.016816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.016828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.851 [2024-12-10 03:08:11.016837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:20:16.851 [2024-12-10 03:08:11.016843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.016946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.016953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.851 [2024-12-10 03:08:11.016959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:16.851 [2024-12-10 03:08:11.016964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.027611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.027727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.851 [2024-12-10 03:08:11.027740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.632 ms 00:20:16.851 [2024-12-10 03:08:11.027746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.037521] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:16.851 [2024-12-10 03:08:11.037548] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:16.851 [2024-12-10 03:08:11.037557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.037563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:16.851 [2024-12-10 03:08:11.037569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.721 ms 00:20:16.851 [2024-12-10 03:08:11.037575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.055823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.055850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:16.851 [2024-12-10 03:08:11.055858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.202 ms 00:20:16.851 [2024-12-10 03:08:11.055865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.064724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.064747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:16.851 [2024-12-10 03:08:11.064754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.808 ms 00:20:16.851 [2024-12-10 03:08:11.064760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.073283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.073305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:16.851 [2024-12-10 03:08:11.073312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.485 ms 00:20:16.851 [2024-12-10 03:08:11.073318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.073783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.073803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:16.851 [2024-12-10 03:08:11.073810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:20:16.851 [2024-12-10 03:08:11.073816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.116424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.116456] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:16.851 [2024-12-10 03:08:11.116466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.590 ms 00:20:16.851 [2024-12-10 03:08:11.116473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.124109] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:16.851 [2024-12-10 03:08:11.135451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.135475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:16.851 [2024-12-10 03:08:11.135489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.916 ms 00:20:16.851 [2024-12-10 03:08:11.135496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.135567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.135575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:16.851 [2024-12-10 03:08:11.135582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:16.851 [2024-12-10 03:08:11.135587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.135623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.135630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:16.851 [2024-12-10 03:08:11.135639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:16.851 [2024-12-10 03:08:11.135646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.135668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.135675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:16.851 [2024-12-10 03:08:11.135681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:16.851 [2024-12-10 03:08:11.135687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.135710] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:16.851 [2024-12-10 03:08:11.135717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.135723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:16.851 [2024-12-10 03:08:11.135729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:16.851 [2024-12-10 03:08:11.135735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.153394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.153416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:16.851 [2024-12-10 03:08:11.153425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.642 ms 00:20:16.851 [2024-12-10 03:08:11.153431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.153497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.851 [2024-12-10 03:08:11.153505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:16.851 [2024-12-10 03:08:11.153512] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:16.851 [2024-12-10 03:08:11.153521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.851 [2024-12-10 03:08:11.154132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:16.851 [2024-12-10 03:08:11.156342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.426 ms, result 0 00:20:16.851 [2024-12-10 03:08:11.156993] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:16.851 [2024-12-10 03:08:11.171880] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.238  [2024-12-10T03:08:13.574Z] Copying: 27/256 [MB] (27 MBps) [2024-12-10T03:08:14.517Z] Copying: 41/256 [MB] (13 MBps) [2024-12-10T03:08:15.462Z] Copying: 70/256 [MB] (29 MBps) [2024-12-10T03:08:16.491Z] Copying: 84/256 [MB] (14 MBps) [2024-12-10T03:08:17.435Z] Copying: 100/256 [MB] (15 MBps) [2024-12-10T03:08:18.378Z] Copying: 119/256 [MB] (18 MBps) [2024-12-10T03:08:19.322Z] Copying: 138/256 [MB] (19 MBps) [2024-12-10T03:08:20.266Z] Copying: 159/256 [MB] (20 MBps) [2024-12-10T03:08:21.652Z] Copying: 172/256 [MB] (13 MBps) [2024-12-10T03:08:22.225Z] Copying: 187/256 [MB] (14 MBps) [2024-12-10T03:08:23.613Z] Copying: 204/256 [MB] (17 MBps) [2024-12-10T03:08:24.556Z] Copying: 222/256 [MB] (18 MBps) [2024-12-10T03:08:25.499Z] Copying: 235/256 [MB] (12 MBps) [2024-12-10T03:08:25.761Z] Copying: 250/256 [MB] (15 MBps) [2024-12-10T03:08:25.761Z] Copying: 256/256 [MB] (average 17 MBps)[2024-12-10 03:08:25.725908] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:31.373 [2024-12-10 03:08:25.739114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.373 [2024-12-10 03:08:25.739312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:31.373 [2024-12-10 03:08:25.739426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:31.373 [2024-12-10 03:08:25.739454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.373 [2024-12-10 03:08:25.739536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:31.373 [2024-12-10 03:08:25.742454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.373 [2024-12-10 03:08:25.742600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:31.373 [2024-12-10 03:08:25.742678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.876 ms 00:20:31.373 [2024-12-10 03:08:25.742708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.373 [2024-12-10 03:08:25.743059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.373 [2024-12-10 03:08:25.743149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:31.373 [2024-12-10 03:08:25.743214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:20:31.373 [2024-12-10 03:08:25.743238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.373 [2024-12-10 03:08:25.746989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.373 [2024-12-10 03:08:25.747081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist L2P 00:20:31.373 [2024-12-10 03:08:25.747141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.713 ms 00:20:31.373 [2024-12-10 03:08:25.747164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.373 [2024-12-10 03:08:25.754407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.373 [2024-12-10 03:08:25.754580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:31.373 [2024-12-10 03:08:25.754646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.207 ms 00:20:31.373 [2024-12-10 03:08:25.754670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.636 [2024-12-10 03:08:25.778683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.636 [2024-12-10 03:08:25.778815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:31.636 [2024-12-10 03:08:25.778874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.888 ms 00:20:31.636 [2024-12-10 03:08:25.778922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.636 [2024-12-10 03:08:25.793762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.636 [2024-12-10 03:08:25.793884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:31.636 [2024-12-10 03:08:25.793902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.788 ms 00:20:31.636 [2024-12-10 03:08:25.793911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.636 [2024-12-10 03:08:25.794042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.636 [2024-12-10 03:08:25.794056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:31.636 [2024-12-10 03:08:25.794073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:31.636 [2024-12-10 03:08:25.794081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.636 [2024-12-10 03:08:25.817744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.636 [2024-12-10 03:08:25.817777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:31.636 [2024-12-10 03:08:25.817788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.646 ms 00:20:31.636 [2024-12-10 03:08:25.817795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.636 [2024-12-10 03:08:25.841142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.636 [2024-12-10 03:08:25.841278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:31.636 [2024-12-10 03:08:25.841294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.312 ms 00:20:31.636 [2024-12-10 03:08:25.841301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.636 [2024-12-10 03:08:25.864216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.636 [2024-12-10 03:08:25.864335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:31.636 [2024-12-10 03:08:25.864351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.884 ms 00:20:31.636 [2024-12-10 03:08:25.864358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.636 [2024-12-10 03:08:25.887198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.636 [2024-12-10 
03:08:25.887331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:31.636 [2024-12-10 03:08:25.887347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.766 ms 00:20:31.636 [2024-12-10 03:08:25.887355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.636 [2024-12-10 03:08:25.887407] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:31.636 [2024-12-10 03:08:25.887422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:31.636 [2024-12-10 03:08:25.887871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887984] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.887991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888187] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:31.637 [2024-12-10 03:08:25.888228] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:31.637 [2024-12-10 03:08:25.888242] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa9db8c7-13a0-427a-b0c5-b6bfa89fe0fd 00:20:31.637 [2024-12-10 03:08:25.888251] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:31.637 [2024-12-10 03:08:25.888259] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:31.637 [2024-12-10 03:08:25.888266] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:31.637 [2024-12-10 03:08:25.888274] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:31.637 [2024-12-10 03:08:25.888282] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:31.637 [2024-12-10 03:08:25.888292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:31.637 [2024-12-10 03:08:25.888300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:31.637 [2024-12-10 03:08:25.888307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:31.637 [2024-12-10 03:08:25.888314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:31.637 [2024-12-10 03:08:25.888321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.637 [2024-12-10 03:08:25.888328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:31.637 [2024-12-10 03:08:25.888337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:20:31.637 [2024-12-10 03:08:25.888345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.637 [2024-12-10 03:08:25.901199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.637 [2024-12-10 03:08:25.901235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:31.637 [2024-12-10 03:08:25.901245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.812 ms 00:20:31.637 [2024-12-10 03:08:25.901258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.637 [2024-12-10 03:08:25.901679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.637 [2024-12-10 03:08:25.901690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:31.637 [2024-12-10 03:08:25.901700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:20:31.637 [2024-12-10 03:08:25.901708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.637 [2024-12-10 03:08:25.939456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.637 [2024-12-10 03:08:25.939505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.637 [2024-12-10 03:08:25.939522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.637 [2024-12-10 03:08:25.939531] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:31.637 [2024-12-10 03:08:25.939638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.637 [2024-12-10 03:08:25.939648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.637 [2024-12-10 03:08:25.939658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.637 [2024-12-10 03:08:25.939666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.637 [2024-12-10 03:08:25.939718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.637 [2024-12-10 03:08:25.939727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.637 [2024-12-10 03:08:25.939736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.637 [2024-12-10 03:08:25.939746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.637 [2024-12-10 03:08:25.939765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.637 [2024-12-10 03:08:25.939773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.637 [2024-12-10 03:08:25.939782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.637 [2024-12-10 03:08:25.939789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.898 [2024-12-10 03:08:26.022426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.898 [2024-12-10 03:08:26.022481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.898 [2024-12-10 03:08:26.022495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.898 [2024-12-10 03:08:26.022510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.898 [2024-12-10 03:08:26.090396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.899 [2024-12-10 03:08:26.090451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.899 [2024-12-10 03:08:26.090463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.899 [2024-12-10 03:08:26.090472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.899 [2024-12-10 03:08:26.090544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.899 [2024-12-10 03:08:26.090554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.899 [2024-12-10 03:08:26.090563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.899 [2024-12-10 03:08:26.090572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.899 [2024-12-10 03:08:26.090611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.899 [2024-12-10 03:08:26.090621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.899 [2024-12-10 03:08:26.090630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.899 [2024-12-10 03:08:26.090637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.899 [2024-12-10 03:08:26.090742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.899 [2024-12-10 03:08:26.090754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.899 [2024-12-10 03:08:26.090763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:31.899 [2024-12-10 03:08:26.090771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.899 [2024-12-10 03:08:26.090812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.899 [2024-12-10 03:08:26.090826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:31.899 [2024-12-10 03:08:26.090835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.899 [2024-12-10 03:08:26.090842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.899 [2024-12-10 03:08:26.090885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.899 [2024-12-10 03:08:26.090894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.899 [2024-12-10 03:08:26.090903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.899 [2024-12-10 03:08:26.090911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.899 [2024-12-10 03:08:26.090959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.899 [2024-12-10 03:08:26.090969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.899 [2024-12-10 03:08:26.090978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.899 [2024-12-10 03:08:26.090986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.899 [2024-12-10 03:08:26.091136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 352.017 ms, result 0 00:20:32.471 00:20:32.471 00:20:32.765 03:08:26 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:33.046 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:33.046 03:08:27 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:33.046 03:08:27 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:33.046 03:08:27 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:33.046 03:08:27 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:33.046 03:08:27 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:33.306 03:08:27 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:33.306 03:08:27 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76981 00:20:33.306 03:08:27 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76981 ']' 00:20:33.306 Process with pid 76981 is not found 00:20:33.306 03:08:27 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76981 00:20:33.307 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76981) - No such process 00:20:33.307 03:08:27 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76981 is not found' 00:20:33.307 ************************************ 00:20:33.307 END TEST ftl_trim 00:20:33.307 ************************************ 00:20:33.307 00:20:33.307 real 1m13.637s 00:20:33.307 user 1m30.425s 00:20:33.307 sys 0m13.734s 00:20:33.307 03:08:27 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.307 03:08:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:33.307 03:08:27 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:33.307 03:08:27 
ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:33.307 03:08:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.307 03:08:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:33.307 ************************************ 00:20:33.307 START TEST ftl_restore 00:20:33.307 ************************************ 00:20:33.307 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:33.307 * Looking for test storage... 00:20:33.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:33.307 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:33.307 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:20:33.307 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:33.307 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:33.307 03:08:27 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:33.568 03:08:27 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:33.568 03:08:27 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:33.568 03:08:27 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:33.568 03:08:27 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:33.568 03:08:27 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:33.568 03:08:27 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:33.568 03:08:27 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:33.568 03:08:27 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:33.568 03:08:27 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:33.568 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:33.568 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:33.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.568 --rc genhtml_branch_coverage=1 00:20:33.568 --rc genhtml_function_coverage=1 00:20:33.568 --rc genhtml_legend=1 00:20:33.568 --rc geninfo_all_blocks=1 00:20:33.568 --rc geninfo_unexecuted_blocks=1 00:20:33.568 00:20:33.568 ' 00:20:33.568 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:33.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.568 --rc genhtml_branch_coverage=1 00:20:33.568 --rc genhtml_function_coverage=1 00:20:33.568 --rc genhtml_legend=1 00:20:33.568 --rc geninfo_all_blocks=1 00:20:33.568 --rc geninfo_unexecuted_blocks=1 00:20:33.568 00:20:33.568 ' 00:20:33.568 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:33.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.568 --rc genhtml_branch_coverage=1 00:20:33.568 --rc genhtml_function_coverage=1 00:20:33.568 --rc genhtml_legend=1 00:20:33.568 --rc geninfo_all_blocks=1 00:20:33.568 --rc geninfo_unexecuted_blocks=1 00:20:33.568 00:20:33.568 ' 00:20:33.568 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:33.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.568 --rc genhtml_branch_coverage=1 00:20:33.568 --rc genhtml_function_coverage=1 00:20:33.568 --rc genhtml_legend=1 00:20:33.568 --rc geninfo_all_blocks=1 00:20:33.568 --rc geninfo_unexecuted_blocks=1 00:20:33.568 00:20:33.568 ' 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
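The xtrace above shows how scripts/common.sh decides whether the installed lcov is recent enough: `lt 1.15 2` tokenizes both version strings on `.`, `-` and `:` (the `IFS=.-:` / `read -ra ver1` steps) and compares the fields as integers, which is why `1.15` sorts below `2` even though a plain string compare would put it after. A minimal sketch of that field-wise comparison, assuming plain bash; `version_lt` is a hypothetical name, not SPDK's actual helper:

```bash
#!/usr/bin/env bash
# Field-wise version comparison in the spirit of scripts/common.sh's
# cmp_versions, as traced above. Illustrative only: non-numeric fields
# are not handled here.
version_lt() {
    local IFS='.-:'                 # same separators as the IFS=.-: step in the log
    local -a v1=($1) v2=($2)        # tokenize both version strings
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # pad the shorter version with zeros
        (( a < b )) && return 0             # first differing field decides
        (( a > b )) && return 1
    done
    return 1                        # equal versions are not "less than"
}

version_lt 1.15 2   && echo "1.15 < 2"      # matches the 'lt 1.15 2' call above
version_lt 1.15 1.2 || echo "1.15 >= 1.2"   # numeric fields: 15 > 2
```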
00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:33.568 03:08:27 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.IYI63Ph7MZ 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:33.569 
03:08:27 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77270 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77270 00:20:33.569 03:08:27 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:33.569 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77270 ']' 00:20:33.569 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.569 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.569 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.569 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.569 03:08:27 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:33.569 [2024-12-10 03:08:27.808676] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:33.569 [2024-12-10 03:08:27.809062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77270 ] 00:20:33.831 [2024-12-10 03:08:27.971471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.831 [2024-12-10 03:08:28.077079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.403 03:08:28 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.403 03:08:28 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:34.403 03:08:28 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:34.403 03:08:28 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:34.403 03:08:28 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:34.404 03:08:28 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:34.404 03:08:28 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:34.404 03:08:28 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:34.976 03:08:29 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:34.976 03:08:29 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:34.976 03:08:29 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:34.976 03:08:29 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:34.976 03:08:29 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:34.976 03:08:29 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:34.976 03:08:29 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:34.976 03:08:29 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:34.976 03:08:29 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:34.976 { 00:20:34.976 "name": "nvme0n1", 00:20:34.976 "aliases": [ 00:20:34.976 "bf01c4b1-9ff7-44b8-87c5-ae6c50c48a8d" 00:20:34.976 ], 00:20:34.976 "product_name": "NVMe disk", 00:20:34.976 "block_size": 4096, 00:20:34.976 "num_blocks": 1310720, 00:20:34.976 "uuid": 
"bf01c4b1-9ff7-44b8-87c5-ae6c50c48a8d", 00:20:34.976 "numa_id": -1, 00:20:34.976 "assigned_rate_limits": { 00:20:34.976 "rw_ios_per_sec": 0, 00:20:34.976 "rw_mbytes_per_sec": 0, 00:20:34.976 "r_mbytes_per_sec": 0, 00:20:34.976 "w_mbytes_per_sec": 0 00:20:34.976 }, 00:20:34.976 "claimed": true, 00:20:34.976 "claim_type": "read_many_write_one", 00:20:34.976 "zoned": false, 00:20:34.976 "supported_io_types": { 00:20:34.976 "read": true, 00:20:34.976 "write": true, 00:20:34.976 "unmap": true, 00:20:34.976 "flush": true, 00:20:34.976 "reset": true, 00:20:34.976 "nvme_admin": true, 00:20:34.976 "nvme_io": true, 00:20:34.976 "nvme_io_md": false, 00:20:34.977 "write_zeroes": true, 00:20:34.977 "zcopy": false, 00:20:34.977 "get_zone_info": false, 00:20:34.977 "zone_management": false, 00:20:34.977 "zone_append": false, 00:20:34.977 "compare": true, 00:20:34.977 "compare_and_write": false, 00:20:34.977 "abort": true, 00:20:34.977 "seek_hole": false, 00:20:34.977 "seek_data": false, 00:20:34.977 "copy": true, 00:20:34.977 "nvme_iov_md": false 00:20:34.977 }, 00:20:34.977 "driver_specific": { 00:20:34.977 "nvme": [ 00:20:34.977 { 00:20:34.977 "pci_address": "0000:00:11.0", 00:20:34.977 "trid": { 00:20:34.977 "trtype": "PCIe", 00:20:34.977 "traddr": "0000:00:11.0" 00:20:34.977 }, 00:20:34.977 "ctrlr_data": { 00:20:34.977 "cntlid": 0, 00:20:34.977 "vendor_id": "0x1b36", 00:20:34.977 "model_number": "QEMU NVMe Ctrl", 00:20:34.977 "serial_number": "12341", 00:20:34.977 "firmware_revision": "8.0.0", 00:20:34.977 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:34.977 "oacs": { 00:20:34.977 "security": 0, 00:20:34.977 "format": 1, 00:20:34.977 "firmware": 0, 00:20:34.977 "ns_manage": 1 00:20:34.977 }, 00:20:34.977 "multi_ctrlr": false, 00:20:34.977 "ana_reporting": false 00:20:34.977 }, 00:20:34.977 "vs": { 00:20:34.977 "nvme_version": "1.4" 00:20:34.977 }, 00:20:34.977 "ns_data": { 00:20:34.977 "id": 1, 00:20:34.977 "can_share": false 00:20:34.977 } 00:20:34.977 } 00:20:34.977 ], 00:20:34.977 "mp_policy": "active_passive" 00:20:34.977 } 00:20:34.977 } 00:20:34.977 ]' 00:20:34.977 03:08:29 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:34.977 03:08:29 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:34.977 03:08:29 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:34.977 03:08:29 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:34.977 03:08:29 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:34.977 03:08:29 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:34.977 03:08:29 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:34.977 03:08:29 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:34.977 03:08:29 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:34.977 03:08:29 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:34.977 03:08:29 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:35.238 03:08:29 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=eaefe0e7-a23c-4b59-95ad-fd424fdcb5cc 00:20:35.238 03:08:29 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:35.238 03:08:29 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eaefe0e7-a23c-4b59-95ad-fd424fdcb5cc 00:20:35.500 03:08:29 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:35.762 03:08:30 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=79510b54-bed7-43d9-a4d5-083382eb4b70 00:20:35.762 03:08:30 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 79510b54-bed7-43d9-a4d5-083382eb4b70 00:20:36.023 03:08:30 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=ab2dae3a-26e7-44c1-b53c-f9f4096ce657 00:20:36.023 03:08:30 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:36.023 03:08:30 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ab2dae3a-26e7-44c1-b53c-f9f4096ce657 00:20:36.023 03:08:30 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:36.023 03:08:30 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:36.023 03:08:30 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=ab2dae3a-26e7-44c1-b53c-f9f4096ce657 00:20:36.023 03:08:30 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:36.023 03:08:30 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size ab2dae3a-26e7-44c1-b53c-f9f4096ce657 00:20:36.023 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=ab2dae3a-26e7-44c1-b53c-f9f4096ce657 00:20:36.023 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:36.023 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:36.023 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:36.023 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ab2dae3a-26e7-44c1-b53c-f9f4096ce657 00:20:36.285 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:36.285 { 00:20:36.285 "name": "ab2dae3a-26e7-44c1-b53c-f9f4096ce657", 00:20:36.285 "aliases": [ 00:20:36.285 "lvs/nvme0n1p0" 00:20:36.285 ], 00:20:36.285 "product_name": "Logical Volume", 00:20:36.285 "block_size": 4096, 00:20:36.285 "num_blocks": 26476544, 00:20:36.285 "uuid": "ab2dae3a-26e7-44c1-b53c-f9f4096ce657", 00:20:36.285 "assigned_rate_limits": { 00:20:36.285 "rw_ios_per_sec": 0, 00:20:36.285 "rw_mbytes_per_sec": 0, 00:20:36.285 "r_mbytes_per_sec": 0, 00:20:36.285 "w_mbytes_per_sec": 0 00:20:36.285 }, 00:20:36.285 "claimed": false, 00:20:36.285 "zoned": false, 00:20:36.285 "supported_io_types": { 00:20:36.285 "read": true, 00:20:36.285 "write": true, 00:20:36.285 "unmap": true, 00:20:36.285 "flush": false, 00:20:36.285 "reset": true, 00:20:36.285 "nvme_admin": false, 00:20:36.285 "nvme_io": false, 00:20:36.285 "nvme_io_md": false, 00:20:36.285 "write_zeroes": true, 00:20:36.285 "zcopy": false, 00:20:36.285 "get_zone_info": false, 00:20:36.285 "zone_management": false, 00:20:36.285 "zone_append": false, 00:20:36.285 "compare": false, 00:20:36.285 "compare_and_write": false, 00:20:36.285 "abort": false, 00:20:36.285 "seek_hole": true, 00:20:36.285 "seek_data": true, 00:20:36.285 "copy": false, 00:20:36.285 "nvme_iov_md": false 00:20:36.285 }, 00:20:36.285 "driver_specific": { 00:20:36.285 "lvol": { 00:20:36.285 "lvol_store_uuid": "79510b54-bed7-43d9-a4d5-083382eb4b70", 00:20:36.285 "base_bdev": "nvme0n1", 00:20:36.285 "thin_provision": true, 00:20:36.285 "num_allocated_clusters": 0, 00:20:36.285 "snapshot": false, 00:20:36.286 "clone": false, 00:20:36.286 "esnap_clone": false 00:20:36.286 } 00:20:36.286 } 00:20:36.286 } 00:20:36.286 ]' 00:20:36.286 03:08:30 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:36.286 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:36.286 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:36.286 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:36.286 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:36.286 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:36.286 03:08:30 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:36.286 03:08:30 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:36.286 03:08:30 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:36.548 03:08:30 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:36.548 03:08:30 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:36.548 03:08:30 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size ab2dae3a-26e7-44c1-b53c-f9f4096ce657 00:20:36.548 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=ab2dae3a-26e7-44c1-b53c-f9f4096ce657 00:20:36.548 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:36.548 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:36.548 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:36.548 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ab2dae3a-26e7-44c1-b53c-f9f4096ce657 00:20:36.808 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:36.808 { 00:20:36.808 "name": "ab2dae3a-26e7-44c1-b53c-f9f4096ce657", 00:20:36.808 "aliases": [ 00:20:36.808 "lvs/nvme0n1p0" 00:20:36.808 ], 00:20:36.808 "product_name": "Logical Volume", 00:20:36.808 "block_size": 4096, 00:20:36.808 "num_blocks": 26476544, 00:20:36.808 "uuid": "ab2dae3a-26e7-44c1-b53c-f9f4096ce657", 00:20:36.808 "assigned_rate_limits": { 00:20:36.808 "rw_ios_per_sec": 0, 00:20:36.808 "rw_mbytes_per_sec": 0, 00:20:36.808 "r_mbytes_per_sec": 0, 00:20:36.808 "w_mbytes_per_sec": 0 00:20:36.808 }, 00:20:36.808 "claimed": false, 00:20:36.808 "zoned": false, 00:20:36.808 "supported_io_types": { 00:20:36.808 "read": true, 00:20:36.808 "write": true, 00:20:36.808 "unmap": true, 00:20:36.808 "flush": false, 00:20:36.808 "reset": true, 00:20:36.808 "nvme_admin": false, 00:20:36.808 "nvme_io": false, 00:20:36.808 "nvme_io_md": false, 00:20:36.808 "write_zeroes": true, 00:20:36.808 "zcopy": false, 00:20:36.808 "get_zone_info": false, 00:20:36.808 "zone_management": false, 00:20:36.808 "zone_append": false, 00:20:36.808 "compare": false, 00:20:36.808 "compare_and_write": false, 00:20:36.808 "abort": false, 00:20:36.808 "seek_hole": true, 00:20:36.808 "seek_data": true, 00:20:36.808 "copy": false, 00:20:36.808 "nvme_iov_md": false 00:20:36.808 }, 00:20:36.808 "driver_specific": { 00:20:36.808 "lvol": { 00:20:36.808 "lvol_store_uuid": "79510b54-bed7-43d9-a4d5-083382eb4b70", 00:20:36.808 "base_bdev": "nvme0n1", 00:20:36.808 "thin_provision": true, 00:20:36.808 "num_allocated_clusters": 0, 00:20:36.808 "snapshot": false, 00:20:36.808 "clone": false, 00:20:36.808 "esnap_clone": false 00:20:36.809 } 00:20:36.809 } 00:20:36.809 } 00:20:36.809 ]' 00:20:36.809 03:08:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
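The surrounding xtrace repeats one small probe several times: `rpc.py bdev_get_bdevs -b <name>` returns a one-element JSON array, `jq` pulls out `block_size` and `num_blocks`, and the size in MiB is `block_size * num_blocks / 1024 / 1024` (4096 × 1310720 gave the 5120 MiB computed for nvme0n1 earlier). A sketch of that pattern, assuming a running spdk_tgt on the default socket; it mirrors rather than copies autotest_common.sh's get_bdev_size():

```bash
#!/usr/bin/env bash
# Query a live SPDK target for a bdev's geometry and report it in MiB,
# following the jq pipeline visible in the trace. Assumes spdk_tgt is
# listening on the default socket and the bdev exists.
set -euo pipefail
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from the log
bdev=${1:-nvme0n1}

info=$("$rpc_py" bdev_get_bdevs -b "$bdev")   # one-element JSON array
bs=$(jq '.[] .block_size' <<< "$info")        # e.g. 4096
nb=$(jq '.[] .num_blocks' <<< "$info")        # e.g. 1310720
echo "$bdev: $(( bs * nb / 1024 / 1024 )) MiB"   # 4096 * 1310720 / 2^20 = 5120
```

Run against the lvol, the same pipeline produces the bs=4096 / nb=26476544 values visible just below, i.e. the 103424 MiB figure for the thin-provisioned volume.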
00:20:36.809 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:36.809 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:36.809 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:36.809 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:36.809 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:36.809 03:08:31 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:36.809 03:08:31 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:37.070 03:08:31 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:37.070 03:08:31 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size ab2dae3a-26e7-44c1-b53c-f9f4096ce657 00:20:37.070 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=ab2dae3a-26e7-44c1-b53c-f9f4096ce657 00:20:37.070 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:37.070 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:37.070 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:37.070 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ab2dae3a-26e7-44c1-b53c-f9f4096ce657 00:20:37.332 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:37.332 { 00:20:37.332 "name": "ab2dae3a-26e7-44c1-b53c-f9f4096ce657", 00:20:37.332 "aliases": [ 00:20:37.332 "lvs/nvme0n1p0" 00:20:37.332 ], 00:20:37.333 "product_name": "Logical Volume", 00:20:37.333 "block_size": 4096, 00:20:37.333 "num_blocks": 26476544, 00:20:37.333 "uuid": "ab2dae3a-26e7-44c1-b53c-f9f4096ce657", 00:20:37.333 "assigned_rate_limits": { 00:20:37.333 "rw_ios_per_sec": 0, 00:20:37.333 "rw_mbytes_per_sec": 0, 00:20:37.333 "r_mbytes_per_sec": 0, 00:20:37.333 "w_mbytes_per_sec": 0 00:20:37.333 }, 00:20:37.333 "claimed": false, 00:20:37.333 "zoned": false, 00:20:37.333 "supported_io_types": { 00:20:37.333 "read": true, 00:20:37.333 "write": true, 00:20:37.333 "unmap": true, 00:20:37.333 "flush": false, 00:20:37.333 "reset": true, 00:20:37.333 "nvme_admin": false, 00:20:37.333 "nvme_io": false, 00:20:37.333 "nvme_io_md": false, 00:20:37.333 "write_zeroes": true, 00:20:37.333 "zcopy": false, 00:20:37.333 "get_zone_info": false, 00:20:37.333 "zone_management": false, 00:20:37.333 "zone_append": false, 00:20:37.333 "compare": false, 00:20:37.333 "compare_and_write": false, 00:20:37.333 "abort": false, 00:20:37.333 "seek_hole": true, 00:20:37.333 "seek_data": true, 00:20:37.333 "copy": false, 00:20:37.333 "nvme_iov_md": false 00:20:37.333 }, 00:20:37.333 "driver_specific": { 00:20:37.333 "lvol": { 00:20:37.333 "lvol_store_uuid": "79510b54-bed7-43d9-a4d5-083382eb4b70", 00:20:37.333 "base_bdev": "nvme0n1", 00:20:37.333 "thin_provision": true, 00:20:37.333 "num_allocated_clusters": 0, 00:20:37.333 "snapshot": false, 00:20:37.333 "clone": false, 00:20:37.333 "esnap_clone": false 00:20:37.333 } 00:20:37.333 } 00:20:37.333 } 00:20:37.333 ]' 00:20:37.333 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:37.333 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:37.333 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:37.333 03:08:31 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:37.333 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:37.333 03:08:31 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:37.333 03:08:31 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:37.333 03:08:31 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ab2dae3a-26e7-44c1-b53c-f9f4096ce657 --l2p_dram_limit 10' 00:20:37.333 03:08:31 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:37.333 03:08:31 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:37.333 03:08:31 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:37.333 03:08:31 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:37.333 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:37.333 03:08:31 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ab2dae3a-26e7-44c1-b53c-f9f4096ce657 --l2p_dram_limit 10 -c nvc0n1p0 00:20:37.333 [2024-12-10 03:08:31.684728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.333 [2024-12-10 03:08:31.684765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:37.333 [2024-12-10 03:08:31.684778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:37.333 [2024-12-10 03:08:31.684785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.333 [2024-12-10 03:08:31.684831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.333 [2024-12-10 03:08:31.684839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:37.333 [2024-12-10 03:08:31.684846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:37.333 [2024-12-10 03:08:31.684852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.333 [2024-12-10 03:08:31.684872] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:37.333 [2024-12-10 03:08:31.685472] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:37.333 [2024-12-10 03:08:31.685488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.333 [2024-12-10 03:08:31.685494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:37.333 [2024-12-10 03:08:31.685503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:20:37.333 [2024-12-10 03:08:31.685508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.333 [2024-12-10 03:08:31.685535] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7042e1d7-5da6-4d4d-8fc3-9955076e703c 00:20:37.333 [2024-12-10 03:08:31.686485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.333 [2024-12-10 03:08:31.686515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:37.333 [2024-12-10 03:08:31.686523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:37.333 [2024-12-10 03:08:31.686530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.333 [2024-12-10 03:08:31.691172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.333 [2024-12-10 
03:08:31.691202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:37.333 [2024-12-10 03:08:31.691211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.580 ms 00:20:37.333 [2024-12-10 03:08:31.691218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.333 [2024-12-10 03:08:31.691285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.333 [2024-12-10 03:08:31.691294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:37.333 [2024-12-10 03:08:31.691301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:37.333 [2024-12-10 03:08:31.691310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.333 [2024-12-10 03:08:31.691340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.333 [2024-12-10 03:08:31.691348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:37.333 [2024-12-10 03:08:31.691357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:37.333 [2024-12-10 03:08:31.691363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.333 [2024-12-10 03:08:31.691389] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:37.333 [2024-12-10 03:08:31.694335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.333 [2024-12-10 03:08:31.694454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:37.333 [2024-12-10 03:08:31.694471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.949 ms 00:20:37.333 [2024-12-10 03:08:31.694478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.333 [2024-12-10 03:08:31.694510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.333 [2024-12-10 03:08:31.694516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:37.333 [2024-12-10 03:08:31.694524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:37.333 [2024-12-10 03:08:31.694530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.333 [2024-12-10 03:08:31.694543] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:37.333 [2024-12-10 03:08:31.694654] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:37.333 [2024-12-10 03:08:31.694666] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:37.333 [2024-12-10 03:08:31.694675] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:37.333 [2024-12-10 03:08:31.694684] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:37.333 [2024-12-10 03:08:31.694690] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:37.333 [2024-12-10 03:08:31.694698] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:37.333 [2024-12-10 03:08:31.694703] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:37.333 [2024-12-10 03:08:31.694714] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:37.333 [2024-12-10 03:08:31.694719] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:37.333 [2024-12-10 03:08:31.694726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.333 [2024-12-10 03:08:31.694736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:37.333 [2024-12-10 03:08:31.694743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:20:37.333 [2024-12-10 03:08:31.694748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.333 [2024-12-10 03:08:31.694814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.333 [2024-12-10 03:08:31.694820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:37.333 [2024-12-10 03:08:31.694827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:37.333 [2024-12-10 03:08:31.694833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.333 [2024-12-10 03:08:31.694911] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:37.333 [2024-12-10 03:08:31.694918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:37.333 [2024-12-10 03:08:31.694926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:37.333 [2024-12-10 03:08:31.694932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.333 [2024-12-10 03:08:31.694939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:37.333 [2024-12-10 03:08:31.694945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:37.333 [2024-12-10 03:08:31.694951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:37.333 [2024-12-10 03:08:31.694956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:37.333 [2024-12-10 03:08:31.694963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:37.333 [2024-12-10 03:08:31.694968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:37.333 [2024-12-10 03:08:31.694975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:37.333 [2024-12-10 03:08:31.694980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:37.333 [2024-12-10 03:08:31.694987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:37.333 [2024-12-10 03:08:31.694992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:37.333 [2024-12-10 03:08:31.694998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:37.333 [2024-12-10 03:08:31.695003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.333 [2024-12-10 03:08:31.695012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:37.333 [2024-12-10 03:08:31.695017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:37.334 [2024-12-10 03:08:31.695023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.334 [2024-12-10 03:08:31.695028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:37.334 [2024-12-10 03:08:31.695035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:37.334 [2024-12-10 03:08:31.695039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.334 [2024-12-10 03:08:31.695046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:37.334 
[2024-12-10 03:08:31.695051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:37.334 [2024-12-10 03:08:31.695057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.334 [2024-12-10 03:08:31.695064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:37.334 [2024-12-10 03:08:31.695070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:37.334 [2024-12-10 03:08:31.695075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.334 [2024-12-10 03:08:31.695081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:37.334 [2024-12-10 03:08:31.695086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:37.334 [2024-12-10 03:08:31.695092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.334 [2024-12-10 03:08:31.695097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:37.334 [2024-12-10 03:08:31.695105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:37.334 [2024-12-10 03:08:31.695110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:37.334 [2024-12-10 03:08:31.695116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:37.334 [2024-12-10 03:08:31.695121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:37.334 [2024-12-10 03:08:31.695128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:37.334 [2024-12-10 03:08:31.695133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:37.334 [2024-12-10 03:08:31.695139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:37.334 [2024-12-10 03:08:31.695144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.334 [2024-12-10 03:08:31.695151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:37.334 [2024-12-10 03:08:31.695156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:37.334 [2024-12-10 03:08:31.695162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.334 [2024-12-10 03:08:31.695166] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:37.334 [2024-12-10 03:08:31.695174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:37.334 [2024-12-10 03:08:31.695179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:37.334 [2024-12-10 03:08:31.695186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.334 [2024-12-10 03:08:31.695192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:37.334 [2024-12-10 03:08:31.695200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:37.334 [2024-12-10 03:08:31.695205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:37.334 [2024-12-10 03:08:31.695211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:37.334 [2024-12-10 03:08:31.695216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:37.334 [2024-12-10 03:08:31.695222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:37.334 [2024-12-10 03:08:31.695229] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:37.334 [2024-12-10 
03:08:31.695238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:37.334 [2024-12-10 03:08:31.695245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:37.334 [2024-12-10 03:08:31.695251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:37.334 [2024-12-10 03:08:31.695258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:37.334 [2024-12-10 03:08:31.695264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:37.334 [2024-12-10 03:08:31.695270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:37.334 [2024-12-10 03:08:31.695277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:37.334 [2024-12-10 03:08:31.695282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:37.334 [2024-12-10 03:08:31.695290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:37.334 [2024-12-10 03:08:31.695295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:37.334 [2024-12-10 03:08:31.695303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:37.334 [2024-12-10 03:08:31.695308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:37.334 [2024-12-10 03:08:31.695315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:37.334 [2024-12-10 03:08:31.695321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:37.334 [2024-12-10 03:08:31.695327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:37.334 [2024-12-10 03:08:31.695333] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:37.334 [2024-12-10 03:08:31.695340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:37.334 [2024-12-10 03:08:31.695346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:37.334 [2024-12-10 03:08:31.695353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:37.334 [2024-12-10 03:08:31.695359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:37.334 [2024-12-10 03:08:31.695366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:37.334 [2024-12-10 03:08:31.695372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.334 [2024-12-10 03:08:31.695393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:37.334 [2024-12-10 03:08:31.695400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:20:37.334 [2024-12-10 03:08:31.695406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.334 [2024-12-10 03:08:31.695446] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:37.334 [2024-12-10 03:08:31.695458] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:42.628 [2024-12-10 03:08:36.136050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.628 [2024-12-10 03:08:36.136146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:42.628 [2024-12-10 03:08:36.136165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4440.584 ms 00:20:42.628 [2024-12-10 03:08:36.136176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.628 [2024-12-10 03:08:36.167414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.628 [2024-12-10 03:08:36.167477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.628 [2024-12-10 03:08:36.167492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.972 ms 00:20:42.628 [2024-12-10 03:08:36.167502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.628 [2024-12-10 03:08:36.167644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.628 [2024-12-10 03:08:36.167658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:42.628 [2024-12-10 03:08:36.167671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:42.628 [2024-12-10 03:08:36.167685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.202806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.202859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.629 [2024-12-10 03:08:36.202872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.086 ms 00:20:42.629 [2024-12-10 03:08:36.202884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.202924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.202936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.629 [2024-12-10 03:08:36.202945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:42.629 [2024-12-10 03:08:36.202963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.203566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.203594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.629 [2024-12-10 03:08:36.203604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:20:42.629 [2024-12-10 03:08:36.203614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 
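The layout numbers in the dump above cross-check cleanly: the l2p region is reported both as 80.00 MiB in the NV cache layout and as the superblock entry with blk_sz:0x5000. A quick sanity check of both figures, assuming the 4096-byte block size established earlier:

    echo $(( 20971520 * 4 / 1024 / 1024 ))    # L2P entries x 4-byte addresses = 80 MiB
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # 0x5000 blocks of 4 KiB = 80 MiB

The "l2p maximum resident size is: 9 (of 10) MiB" notice further down follows from the --l2p_dram_limit 10 passed to bdev_ftl_create: the 80 MiB table is paged through a roughly 10 MiB DRAM window rather than held fully resident.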
[2024-12-10 03:08:36.203730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.203745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.629 [2024-12-10 03:08:36.203755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:20:42.629 [2024-12-10 03:08:36.203767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.221099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.221148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.629 [2024-12-10 03:08:36.221161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.313 ms 00:20:42.629 [2024-12-10 03:08:36.221171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.249607] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:42.629 [2024-12-10 03:08:36.253415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.253461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:42.629 [2024-12-10 03:08:36.253476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.142 ms 00:20:42.629 [2024-12-10 03:08:36.253485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.355992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.356053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:42.629 [2024-12-10 03:08:36.356072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.460 ms 00:20:42.629 [2024-12-10 03:08:36.356081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.356290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.356303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:42.629 [2024-12-10 03:08:36.356317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:20:42.629 [2024-12-10 03:08:36.356326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.381613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.381661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:42.629 [2024-12-10 03:08:36.381677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.231 ms 00:20:42.629 [2024-12-10 03:08:36.381688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.406409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.406603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:42.629 [2024-12-10 03:08:36.406630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.668 ms 00:20:42.629 [2024-12-10 03:08:36.406638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.407280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.407301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:42.629 
[2024-12-10 03:08:36.407316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:20:42.629 [2024-12-10 03:08:36.407324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.492318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.492371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:42.629 [2024-12-10 03:08:36.492411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.908 ms 00:20:42.629 [2024-12-10 03:08:36.492421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.519251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.519295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:42.629 [2024-12-10 03:08:36.519311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.736 ms 00:20:42.629 [2024-12-10 03:08:36.519321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.544768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.544812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:42.629 [2024-12-10 03:08:36.544826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.395 ms 00:20:42.629 [2024-12-10 03:08:36.544834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.571060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.571105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:42.629 [2024-12-10 03:08:36.571119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.175 ms 00:20:42.629 [2024-12-10 03:08:36.571127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.571179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.571190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:42.629 [2024-12-10 03:08:36.571204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:42.629 [2024-12-10 03:08:36.571212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.571302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.629 [2024-12-10 03:08:36.571317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:42.629 [2024-12-10 03:08:36.571327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:42.629 [2024-12-10 03:08:36.571335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.629 [2024-12-10 03:08:36.572790] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4887.533 ms, result 0 00:20:42.629 { 00:20:42.629 "name": "ftl0", 00:20:42.629 "uuid": "7042e1d7-5da6-4d4d-8fc3-9955076e703c" 00:20:42.629 } 00:20:42.629 03:08:36 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:42.629 03:08:36 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:42.629 03:08:36 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:42.629 03:08:36 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:42.892 [2024-12-10 03:08:37.015835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.892 [2024-12-10 03:08:37.015901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:42.892 [2024-12-10 03:08:37.015927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:42.892 [2024-12-10 03:08:37.015938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.892 [2024-12-10 03:08:37.015963] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:42.892 [2024-12-10 03:08:37.019000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.892 [2024-12-10 03:08:37.019040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:42.892 [2024-12-10 03:08:37.019054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.014 ms 00:20:42.892 [2024-12-10 03:08:37.019063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.892 [2024-12-10 03:08:37.019336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.892 [2024-12-10 03:08:37.019347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:42.892 [2024-12-10 03:08:37.019359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:20:42.892 [2024-12-10 03:08:37.019367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.892 [2024-12-10 03:08:37.022631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.892 [2024-12-10 03:08:37.022656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:42.892 [2024-12-10 03:08:37.022669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:20:42.892 [2024-12-10 03:08:37.022678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.892 [2024-12-10 03:08:37.029013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.892 [2024-12-10 03:08:37.029053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:42.892 [2024-12-10 03:08:37.029067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.312 ms 00:20:42.892 [2024-12-10 03:08:37.029075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.892 [2024-12-10 03:08:37.056105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.892 [2024-12-10 03:08:37.056147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:42.892 [2024-12-10 03:08:37.056163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.950 ms 00:20:42.892 [2024-12-10 03:08:37.056172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.892 [2024-12-10 03:08:37.073691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.892 [2024-12-10 03:08:37.073745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:42.892 [2024-12-10 03:08:37.073760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.462 ms 00:20:42.892 [2024-12-10 03:08:37.073769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.892 [2024-12-10 03:08:37.073940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.892 [2024-12-10 03:08:37.073951] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:42.892 [2024-12-10 03:08:37.073963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:20:42.892 [2024-12-10 03:08:37.073974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.892 [2024-12-10 03:08:37.098988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.892 [2024-12-10 03:08:37.099028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:42.892 [2024-12-10 03:08:37.099043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.991 ms 00:20:42.892 [2024-12-10 03:08:37.099050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.892 [2024-12-10 03:08:37.123634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.892 [2024-12-10 03:08:37.123672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:42.892 [2024-12-10 03:08:37.123685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.532 ms 00:20:42.892 [2024-12-10 03:08:37.123693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.892 [2024-12-10 03:08:37.148009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.892 [2024-12-10 03:08:37.148047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:42.892 [2024-12-10 03:08:37.148062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.261 ms 00:20:42.892 [2024-12-10 03:08:37.148071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.892 [2024-12-10 03:08:37.172640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.892 [2024-12-10 03:08:37.172678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:42.892 [2024-12-10 03:08:37.172693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.456 ms 00:20:42.892 [2024-12-10 03:08:37.172701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.892 [2024-12-10 03:08:37.172749] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:42.892 [2024-12-10 03:08:37.172769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172860] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.172992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 
[2024-12-10 03:08:37.173081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:42.892 [2024-12-10 03:08:37.173130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:42.893 [2024-12-10 03:08:37.173319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:42.893 [2024-12-10 03:08:37.173716] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:42.893 [2024-12-10 03:08:37.173727] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7042e1d7-5da6-4d4d-8fc3-9955076e703c 00:20:42.893 [2024-12-10 03:08:37.173735] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:42.893 [2024-12-10 03:08:37.173750] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:42.893 [2024-12-10 03:08:37.173757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:42.893 [2024-12-10 03:08:37.173767] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:42.893 [2024-12-10 03:08:37.173775] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:42.893 [2024-12-10 03:08:37.173786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:42.893 [2024-12-10 03:08:37.173794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:42.893 [2024-12-10 03:08:37.173803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:42.893 [2024-12-10 03:08:37.173810] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:42.893 [2024-12-10 03:08:37.173819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.893 [2024-12-10 03:08:37.173827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:42.893 [2024-12-10 03:08:37.173838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:20:42.893 [2024-12-10 03:08:37.173848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.893 [2024-12-10 03:08:37.187616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.893 [2024-12-10 03:08:37.187662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:42.893 [2024-12-10 03:08:37.187677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.725 ms 00:20:42.893 [2024-12-10 03:08:37.187685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.893 [2024-12-10 03:08:37.188102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.893 [2024-12-10 03:08:37.188124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:42.893 [2024-12-10 03:08:37.188135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:20:42.893 [2024-12-10 03:08:37.188143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.893 [2024-12-10 03:08:37.234789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.893 [2024-12-10 03:08:37.234838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.893 [2024-12-10 03:08:37.234852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.893 [2024-12-10 03:08:37.234861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.893 [2024-12-10 03:08:37.234928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.893 [2024-12-10 03:08:37.234940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.893 [2024-12-10 03:08:37.234951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.893 [2024-12-10 03:08:37.234959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.893 [2024-12-10 03:08:37.235041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.893 [2024-12-10 03:08:37.235053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.893 [2024-12-10 03:08:37.235064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.893 [2024-12-10 03:08:37.235072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.893 [2024-12-10 03:08:37.235095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.893 [2024-12-10 03:08:37.235103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.893 [2024-12-10 03:08:37.235116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.893 [2024-12-10 03:08:37.235123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.155 [2024-12-10 03:08:37.319814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.155 [2024-12-10 03:08:37.319873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:43.155 [2024-12-10 03:08:37.319889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:43.155 [2024-12-10 03:08:37.319898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.155 [2024-12-10 03:08:37.389673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.155 [2024-12-10 03:08:37.389729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:43.155 [2024-12-10 03:08:37.389747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.155 [2024-12-10 03:08:37.389755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.155 [2024-12-10 03:08:37.389867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.155 [2024-12-10 03:08:37.389878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:43.155 [2024-12-10 03:08:37.389889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.155 [2024-12-10 03:08:37.389897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.155 [2024-12-10 03:08:37.389950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.155 [2024-12-10 03:08:37.389960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:43.155 [2024-12-10 03:08:37.389971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.155 [2024-12-10 03:08:37.389979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.155 [2024-12-10 03:08:37.390082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.155 [2024-12-10 03:08:37.390092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:43.155 [2024-12-10 03:08:37.390103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.155 [2024-12-10 03:08:37.390110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.155 [2024-12-10 03:08:37.390151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.155 [2024-12-10 03:08:37.390161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:43.155 [2024-12-10 03:08:37.390171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.155 [2024-12-10 03:08:37.390179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.155 [2024-12-10 03:08:37.390226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.155 [2024-12-10 03:08:37.390234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:43.155 [2024-12-10 03:08:37.390246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.155 [2024-12-10 03:08:37.390254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.155 [2024-12-10 03:08:37.390306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.155 [2024-12-10 03:08:37.390316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:43.155 [2024-12-10 03:08:37.390326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.155 [2024-12-10 03:08:37.390334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.155 [2024-12-10 03:08:37.390512] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 374.638 ms, result 0 00:20:43.155 true 00:20:43.155 03:08:37 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77270 
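killprocess is the autotest_common.sh helper whose xtrace follows; the path it takes through this run, condensed into plain bash for readability (the @-tags refer to the @954-@978 markers in the trace, and pid 77270 is the SPDK app under test, reactor_0):

    pid=77270
    [ -z "$pid" ]                                      # @954: false, a pid was supplied
    kill -0 "$pid"                                     # @958: succeeds, process exists
    [ "$(uname)" = Linux ]                             # @959: true on this runner
    process_name=$(ps --no-headers -o comm= "$pid")    # @960: reactor_0
    [ "$process_name" = sudo ]                         # @964: false, safe to kill directly
    echo "killing process with pid $pid"               # @972
    kill "$pid"                                        # @973
    wait "$pid"                                        # @978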
00:20:43.155 03:08:37 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77270 ']' 00:20:43.155 03:08:37 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77270 00:20:43.155 03:08:37 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:43.155 03:08:37 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.155 03:08:37 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77270 00:20:43.155 03:08:37 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.155 killing process with pid 77270 00:20:43.155 03:08:37 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.155 03:08:37 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77270' 00:20:43.155 03:08:37 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77270 00:20:43.155 03:08:37 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77270 00:20:48.446 03:08:42 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:52.654 262144+0 records in 00:20:52.654 262144+0 records out 00:20:52.654 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.81395 s, 282 MB/s 00:20:52.654 03:08:46 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:53.597 03:08:47 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:53.597 [2024-12-10 03:08:47.958012] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:53.597 [2024-12-10 03:08:47.958107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77506 ] 00:20:53.857 [2024-12-10 03:08:48.110923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.857 [2024-12-10 03:08:48.208444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.115 [2024-12-10 03:08:48.463883] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.115 [2024-12-10 03:08:48.463958] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.374 [2024-12-10 03:08:48.620561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.374 [2024-12-10 03:08:48.620607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:54.374 [2024-12-10 03:08:48.620619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:54.374 [2024-12-10 03:08:48.620627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.374 [2024-12-10 03:08:48.620674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.374 [2024-12-10 03:08:48.620685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.374 [2024-12-10 03:08:48.620694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:54.374 [2024-12-10 03:08:48.620701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.374 [2024-12-10 03:08:48.620716] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:20:54.374 [2024-12-10 03:08:48.621518] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:54.374 [2024-12-10 03:08:48.621547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.374 [2024-12-10 03:08:48.621554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.374 [2024-12-10 03:08:48.621563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:20:54.374 [2024-12-10 03:08:48.621570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.374 [2024-12-10 03:08:48.622681] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:54.374 [2024-12-10 03:08:48.635354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.374 [2024-12-10 03:08:48.635407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:54.374 [2024-12-10 03:08:48.635418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.675 ms 00:20:54.374 [2024-12-10 03:08:48.635426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.374 [2024-12-10 03:08:48.635481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.374 [2024-12-10 03:08:48.635490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:54.374 [2024-12-10 03:08:48.635498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:54.374 [2024-12-10 03:08:48.635505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.374 [2024-12-10 03:08:48.640346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.374 [2024-12-10 03:08:48.640392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.374 [2024-12-10 03:08:48.640402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.793 ms 00:20:54.374 [2024-12-10 03:08:48.640413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.374 [2024-12-10 03:08:48.640478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.374 [2024-12-10 03:08:48.640487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.374 [2024-12-10 03:08:48.640495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:54.374 [2024-12-10 03:08:48.640502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.374 [2024-12-10 03:08:48.640547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.374 [2024-12-10 03:08:48.640557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:54.374 [2024-12-10 03:08:48.640565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:54.374 [2024-12-10 03:08:48.640572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.374 [2024-12-10 03:08:48.640595] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:54.374 [2024-12-10 03:08:48.643757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.374 [2024-12-10 03:08:48.643782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.374 [2024-12-10 03:08:48.643793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.167 ms 00:20:54.374 [2024-12-10 03:08:48.643800] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.374 [2024-12-10 03:08:48.643830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.374 [2024-12-10 03:08:48.643838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:54.374 [2024-12-10 03:08:48.643845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:54.374 [2024-12-10 03:08:48.643852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.374 [2024-12-10 03:08:48.643871] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:54.374 [2024-12-10 03:08:48.643889] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:54.374 [2024-12-10 03:08:48.643939] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:54.374 [2024-12-10 03:08:48.643956] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:54.374 [2024-12-10 03:08:48.644063] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:54.374 [2024-12-10 03:08:48.644073] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:54.374 [2024-12-10 03:08:48.644084] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:54.374 [2024-12-10 03:08:48.644093] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:54.374 [2024-12-10 03:08:48.644102] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:54.374 [2024-12-10 03:08:48.644109] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:54.374 [2024-12-10 03:08:48.644117] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:54.374 [2024-12-10 03:08:48.644126] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:54.374 [2024-12-10 03:08:48.644133] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:54.374 [2024-12-10 03:08:48.644140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.374 [2024-12-10 03:08:48.644147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:54.374 [2024-12-10 03:08:48.644154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:20:54.374 [2024-12-10 03:08:48.644161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.374 [2024-12-10 03:08:48.644244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.374 [2024-12-10 03:08:48.644251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:54.374 [2024-12-10 03:08:48.644258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:54.374 [2024-12-10 03:08:48.644265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.374 [2024-12-10 03:08:48.644366] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:54.374 [2024-12-10 03:08:48.644409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:54.374 [2024-12-10 03:08:48.644418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:20:54.374 [2024-12-10 03:08:48.644426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.374 [2024-12-10 03:08:48.644433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:54.374 [2024-12-10 03:08:48.644440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:54.374 [2024-12-10 03:08:48.644447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:54.374 [2024-12-10 03:08:48.644454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:54.374 [2024-12-10 03:08:48.644461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:54.374 [2024-12-10 03:08:48.644467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.374 [2024-12-10 03:08:48.644474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:54.374 [2024-12-10 03:08:48.644481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:54.374 [2024-12-10 03:08:48.644488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.374 [2024-12-10 03:08:48.644499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:54.374 [2024-12-10 03:08:48.644506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:54.374 [2024-12-10 03:08:48.644513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.374 [2024-12-10 03:08:48.644520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:54.374 [2024-12-10 03:08:48.644526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:54.374 [2024-12-10 03:08:48.644532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.374 [2024-12-10 03:08:48.644539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:54.374 [2024-12-10 03:08:48.644546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:54.374 [2024-12-10 03:08:48.644552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.374 [2024-12-10 03:08:48.644559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:54.374 [2024-12-10 03:08:48.644565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:54.374 [2024-12-10 03:08:48.644572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.374 [2024-12-10 03:08:48.644578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:54.374 [2024-12-10 03:08:48.644584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:54.374 [2024-12-10 03:08:48.644590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.374 [2024-12-10 03:08:48.644597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:54.374 [2024-12-10 03:08:48.644604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:54.374 [2024-12-10 03:08:48.644610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.374 [2024-12-10 03:08:48.644616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:54.374 [2024-12-10 03:08:48.644623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:54.374 [2024-12-10 03:08:48.644629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.374 [2024-12-10 03:08:48.644635] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:20:54.374 [2024-12-10 03:08:48.644641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:54.374 [2024-12-10 03:08:48.644648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.375 [2024-12-10 03:08:48.644654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:54.375 [2024-12-10 03:08:48.644660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:54.375 [2024-12-10 03:08:48.644667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.375 [2024-12-10 03:08:48.644673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:54.375 [2024-12-10 03:08:48.644679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:54.375 [2024-12-10 03:08:48.644686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.375 [2024-12-10 03:08:48.644693] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:54.375 [2024-12-10 03:08:48.644700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:54.375 [2024-12-10 03:08:48.644707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.375 [2024-12-10 03:08:48.644714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.375 [2024-12-10 03:08:48.644722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:54.375 [2024-12-10 03:08:48.644728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:54.375 [2024-12-10 03:08:48.644735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:54.375 [2024-12-10 03:08:48.644742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:54.375 [2024-12-10 03:08:48.644748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:54.375 [2024-12-10 03:08:48.644754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:54.375 [2024-12-10 03:08:48.644762] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:54.375 [2024-12-10 03:08:48.644771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.375 [2024-12-10 03:08:48.644783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:54.375 [2024-12-10 03:08:48.644790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:54.375 [2024-12-10 03:08:48.644797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:54.375 [2024-12-10 03:08:48.644804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:54.375 [2024-12-10 03:08:48.644811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:54.375 [2024-12-10 03:08:48.644818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:54.375 [2024-12-10 03:08:48.644825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:54.375 [2024-12-10 03:08:48.644832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:54.375 [2024-12-10 03:08:48.644839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:54.375 [2024-12-10 03:08:48.644846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:54.375 [2024-12-10 03:08:48.644853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:54.375 [2024-12-10 03:08:48.644860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:54.375 [2024-12-10 03:08:48.644867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:54.375 [2024-12-10 03:08:48.644874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:54.375 [2024-12-10 03:08:48.644881] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:54.375 [2024-12-10 03:08:48.644889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.375 [2024-12-10 03:08:48.644897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:54.375 [2024-12-10 03:08:48.644903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:54.375 [2024-12-10 03:08:48.644910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:54.375 [2024-12-10 03:08:48.644918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:54.375 [2024-12-10 03:08:48.644926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.375 [2024-12-10 03:08:48.644933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:54.375 [2024-12-10 03:08:48.644940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:20:54.375 [2024-12-10 03:08:48.644947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.375 [2024-12-10 03:08:48.670590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.375 [2024-12-10 03:08:48.670623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.375 [2024-12-10 03:08:48.670633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.592 ms 00:20:54.375 [2024-12-10 03:08:48.670643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.375 [2024-12-10 03:08:48.670725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.375 [2024-12-10 03:08:48.670733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:54.375 [2024-12-10 03:08:48.670742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.064 ms 00:20:54.375 [2024-12-10 03:08:48.670749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.375 [2024-12-10 03:08:48.710145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.375 [2024-12-10 03:08:48.710183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.375 [2024-12-10 03:08:48.710195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.347 ms 00:20:54.375 [2024-12-10 03:08:48.710203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.375 [2024-12-10 03:08:48.710239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.375 [2024-12-10 03:08:48.710249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.375 [2024-12-10 03:08:48.710261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:54.375 [2024-12-10 03:08:48.710268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.375 [2024-12-10 03:08:48.710623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.375 [2024-12-10 03:08:48.710651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.375 [2024-12-10 03:08:48.710661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:20:54.375 [2024-12-10 03:08:48.710668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.375 [2024-12-10 03:08:48.710788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.375 [2024-12-10 03:08:48.710797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.375 [2024-12-10 03:08:48.710805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:54.375 [2024-12-10 03:08:48.710816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.375 [2024-12-10 03:08:48.723799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.375 [2024-12-10 03:08:48.723830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.375 [2024-12-10 03:08:48.723842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.964 ms 00:20:54.375 [2024-12-10 03:08:48.723849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.375 [2024-12-10 03:08:48.736235] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:54.375 [2024-12-10 03:08:48.736280] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:54.375 [2024-12-10 03:08:48.736291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.375 [2024-12-10 03:08:48.736298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:54.375 [2024-12-10 03:08:48.736307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.334 ms 00:20:54.375 [2024-12-10 03:08:48.736313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.633 [2024-12-10 03:08:48.760367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.633 [2024-12-10 03:08:48.760411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:54.633 [2024-12-10 03:08:48.760421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.018 ms 00:20:54.633 [2024-12-10 03:08:48.760428] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.633 [2024-12-10 03:08:48.772358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.633 [2024-12-10 03:08:48.772395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:54.633 [2024-12-10 03:08:48.772405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.893 ms 00:20:54.633 [2024-12-10 03:08:48.772412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.633 [2024-12-10 03:08:48.783889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.633 [2024-12-10 03:08:48.783925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:54.633 [2024-12-10 03:08:48.783935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.446 ms 00:20:54.633 [2024-12-10 03:08:48.783942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.633 [2024-12-10 03:08:48.784544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.633 [2024-12-10 03:08:48.784567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:54.633 [2024-12-10 03:08:48.784576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:20:54.633 [2024-12-10 03:08:48.784586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.633 [2024-12-10 03:08:48.840270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.633 [2024-12-10 03:08:48.840327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:54.633 [2024-12-10 03:08:48.840340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.667 ms 00:20:54.633 [2024-12-10 03:08:48.840353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.633 [2024-12-10 03:08:48.850609] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:54.633 [2024-12-10 03:08:48.852838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.633 [2024-12-10 03:08:48.852870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:54.633 [2024-12-10 03:08:48.852881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.420 ms 00:20:54.633 [2024-12-10 03:08:48.852890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.633 [2024-12-10 03:08:48.852970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.633 [2024-12-10 03:08:48.852981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:54.633 [2024-12-10 03:08:48.852991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:54.633 [2024-12-10 03:08:48.852999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.633 [2024-12-10 03:08:48.853065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.633 [2024-12-10 03:08:48.853075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:54.633 [2024-12-10 03:08:48.853083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:54.633 [2024-12-10 03:08:48.853090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.633 [2024-12-10 03:08:48.853108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.633 [2024-12-10 03:08:48.853116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:20:54.633 [2024-12-10 03:08:48.853124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:54.633 [2024-12-10 03:08:48.853131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.633 [2024-12-10 03:08:48.853160] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:54.633 [2024-12-10 03:08:48.853171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.633 [2024-12-10 03:08:48.853178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:54.633 [2024-12-10 03:08:48.853186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:54.634 [2024-12-10 03:08:48.853193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.634 [2024-12-10 03:08:48.876873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.634 [2024-12-10 03:08:48.876910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:54.634 [2024-12-10 03:08:48.876922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.663 ms 00:20:54.634 [2024-12-10 03:08:48.876933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.634 [2024-12-10 03:08:48.876998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.634 [2024-12-10 03:08:48.877007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:54.634 [2024-12-10 03:08:48.877016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:54.634 [2024-12-10 03:08:48.877023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.634 [2024-12-10 03:08:48.877968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 257.008 ms, result 0 00:20:55.568  [2024-12-10T03:08:50.898Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-10T03:08:52.285Z] Copying: 41/1024 [MB] (25 MBps) [2024-12-10T03:08:53.230Z] Copying: 54/1024 [MB] (12 MBps) [2024-12-10T03:08:54.176Z] Copying: 92/1024 [MB] (37 MBps) [2024-12-10T03:08:55.116Z] Copying: 108/1024 [MB] (16 MBps) [2024-12-10T03:08:56.071Z] Copying: 125/1024 [MB] (16 MBps) [2024-12-10T03:08:57.015Z] Copying: 146/1024 [MB] (20 MBps) [2024-12-10T03:08:57.961Z] Copying: 162/1024 [MB] (15 MBps) [2024-12-10T03:08:58.905Z] Copying: 186/1024 [MB] (24 MBps) [2024-12-10T03:09:00.293Z] Copying: 231/1024 [MB] (44 MBps) [2024-12-10T03:09:01.241Z] Copying: 253/1024 [MB] (22 MBps) [2024-12-10T03:09:02.182Z] Copying: 273/1024 [MB] (19 MBps) [2024-12-10T03:09:03.123Z] Copying: 292/1024 [MB] (18 MBps) [2024-12-10T03:09:04.067Z] Copying: 309/1024 [MB] (17 MBps) [2024-12-10T03:09:05.013Z] Copying: 325/1024 [MB] (15 MBps) [2024-12-10T03:09:05.956Z] Copying: 342/1024 [MB] (17 MBps) [2024-12-10T03:09:06.904Z] Copying: 352/1024 [MB] (10 MBps) [2024-12-10T03:09:08.286Z] Copying: 368/1024 [MB] (15 MBps) [2024-12-10T03:09:09.255Z] Copying: 396/1024 [MB] (28 MBps) [2024-12-10T03:09:10.200Z] Copying: 416/1024 [MB] (19 MBps) [2024-12-10T03:09:11.143Z] Copying: 433/1024 [MB] (17 MBps) [2024-12-10T03:09:12.086Z] Copying: 452/1024 [MB] (18 MBps) [2024-12-10T03:09:13.032Z] Copying: 469/1024 [MB] (16 MBps) [2024-12-10T03:09:13.977Z] Copying: 484/1024 [MB] (14 MBps) [2024-12-10T03:09:14.922Z] Copying: 494/1024 [MB] (10 MBps) [2024-12-10T03:09:16.308Z] Copying: 512/1024 [MB] (18 MBps) [2024-12-10T03:09:17.252Z] Copying: 525/1024 [MB] (13 
MBps) [2024-12-10T03:09:18.196Z] Copying: 535/1024 [MB] (10 MBps) [2024-12-10T03:09:19.141Z] Copying: 551/1024 [MB] (15 MBps) [2024-12-10T03:09:20.086Z] Copying: 563/1024 [MB] (11 MBps) [2024-12-10T03:09:21.038Z] Copying: 577/1024 [MB] (13 MBps) [2024-12-10T03:09:22.003Z] Copying: 590/1024 [MB] (12 MBps) [2024-12-10T03:09:22.947Z] Copying: 604/1024 [MB] (14 MBps) [2024-12-10T03:09:24.333Z] Copying: 623/1024 [MB] (18 MBps) [2024-12-10T03:09:24.908Z] Copying: 636/1024 [MB] (13 MBps) [2024-12-10T03:09:26.303Z] Copying: 647/1024 [MB] (10 MBps) [2024-12-10T03:09:27.254Z] Copying: 657/1024 [MB] (10 MBps) [2024-12-10T03:09:28.203Z] Copying: 671/1024 [MB] (13 MBps) [2024-12-10T03:09:29.149Z] Copying: 682/1024 [MB] (11 MBps) [2024-12-10T03:09:30.092Z] Copying: 693/1024 [MB] (11 MBps) [2024-12-10T03:09:31.036Z] Copying: 703/1024 [MB] (10 MBps) [2024-12-10T03:09:31.983Z] Copying: 730936/1048576 [kB] (10080 kBps) [2024-12-10T03:09:32.927Z] Copying: 723/1024 [MB] (10 MBps) [2024-12-10T03:09:34.313Z] Copying: 734/1024 [MB] (10 MBps) [2024-12-10T03:09:34.957Z] Copying: 744/1024 [MB] (10 MBps) [2024-12-10T03:09:35.924Z] Copying: 754/1024 [MB] (10 MBps) [2024-12-10T03:09:37.312Z] Copying: 764/1024 [MB] (10 MBps) [2024-12-10T03:09:38.256Z] Copying: 774/1024 [MB] (10 MBps) [2024-12-10T03:09:39.201Z] Copying: 785/1024 [MB] (10 MBps) [2024-12-10T03:09:40.145Z] Copying: 795/1024 [MB] (10 MBps) [2024-12-10T03:09:41.090Z] Copying: 805/1024 [MB] (10 MBps) [2024-12-10T03:09:42.035Z] Copying: 815/1024 [MB] (10 MBps) [2024-12-10T03:09:42.979Z] Copying: 829/1024 [MB] (13 MBps) [2024-12-10T03:09:43.924Z] Copying: 847/1024 [MB] (17 MBps) [2024-12-10T03:09:45.312Z] Copying: 864/1024 [MB] (17 MBps) [2024-12-10T03:09:46.255Z] Copying: 876/1024 [MB] (11 MBps) [2024-12-10T03:09:47.199Z] Copying: 886/1024 [MB] (10 MBps) [2024-12-10T03:09:48.169Z] Copying: 901/1024 [MB] (14 MBps) [2024-12-10T03:09:49.114Z] Copying: 916/1024 [MB] (14 MBps) [2024-12-10T03:09:50.061Z] Copying: 928/1024 [MB] (12 MBps) [2024-12-10T03:09:51.007Z] Copying: 942/1024 [MB] (14 MBps) [2024-12-10T03:09:51.951Z] Copying: 955/1024 [MB] (12 MBps) [2024-12-10T03:09:52.898Z] Copying: 971/1024 [MB] (15 MBps) [2024-12-10T03:09:54.287Z] Copying: 992/1024 [MB] (21 MBps) [2024-12-10T03:09:54.861Z] Copying: 1009/1024 [MB] (16 MBps) [2024-12-10T03:09:54.861Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-10 03:09:54.700109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.473 [2024-12-10 03:09:54.700166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:00.473 [2024-12-10 03:09:54.700181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:00.473 [2024-12-10 03:09:54.700190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.473 [2024-12-10 03:09:54.700212] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:00.473 [2024-12-10 03:09:54.703258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.473 [2024-12-10 03:09:54.703296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:00.473 [2024-12-10 03:09:54.703315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:22:00.473 [2024-12-10 03:09:54.703323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.473 [2024-12-10 03:09:54.705467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.473 [2024-12-10 03:09:54.705511] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:00.473 [2024-12-10 03:09:54.705522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.118 ms 00:22:00.473 [2024-12-10 03:09:54.705530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.473 [2024-12-10 03:09:54.723066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.473 [2024-12-10 03:09:54.723118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:00.473 [2024-12-10 03:09:54.723130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.518 ms 00:22:00.473 [2024-12-10 03:09:54.723137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.473 [2024-12-10 03:09:54.729351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.473 [2024-12-10 03:09:54.729402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:00.473 [2024-12-10 03:09:54.729414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.165 ms 00:22:00.473 [2024-12-10 03:09:54.729422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.473 [2024-12-10 03:09:54.755792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.473 [2024-12-10 03:09:54.755839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:00.473 [2024-12-10 03:09:54.755851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.311 ms 00:22:00.473 [2024-12-10 03:09:54.755860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.473 [2024-12-10 03:09:54.772170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.473 [2024-12-10 03:09:54.772216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:00.473 [2024-12-10 03:09:54.772228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.265 ms 00:22:00.473 [2024-12-10 03:09:54.772236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.473 [2024-12-10 03:09:54.772404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.473 [2024-12-10 03:09:54.772420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:00.473 [2024-12-10 03:09:54.772431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:22:00.473 [2024-12-10 03:09:54.772438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.473 [2024-12-10 03:09:54.797932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.473 [2024-12-10 03:09:54.797980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:00.473 [2024-12-10 03:09:54.797990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.478 ms 00:22:00.473 [2024-12-10 03:09:54.797997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.473 [2024-12-10 03:09:54.822870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.473 [2024-12-10 03:09:54.822914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:00.473 [2024-12-10 03:09:54.822925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.829 ms 00:22:00.473 [2024-12-10 03:09:54.822931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.473 [2024-12-10 03:09:54.847464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:00.473 [2024-12-10 03:09:54.847509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:00.473 [2024-12-10 03:09:54.847520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.490 ms 00:22:00.473 [2024-12-10 03:09:54.847526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.737 [2024-12-10 03:09:54.871883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.737 [2024-12-10 03:09:54.871952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:00.737 [2024-12-10 03:09:54.871964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.271 ms 00:22:00.737 [2024-12-10 03:09:54.871971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.737 [2024-12-10 03:09:54.872012] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:00.737 [2024-12-10 03:09:54.872028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872175] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 
03:09:54.872371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:00.737 [2024-12-10 03:09:54.872561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:22:00.738 [2024-12-10 03:09:54.872583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 
wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:00.738 [2024-12-10 03:09:54.872832] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:00.738 [2024-12-10 03:09:54.872843] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7042e1d7-5da6-4d4d-8fc3-9955076e703c 00:22:00.738 [2024-12-10 03:09:54.872852] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:00.738 [2024-12-10 03:09:54.872859] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:00.738 [2024-12-10 03:09:54.872867] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:00.738 [2024-12-10 03:09:54.872875] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:00.738 [2024-12-10 03:09:54.872883] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:00.738 [2024-12-10 03:09:54.872899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:00.738 [2024-12-10 03:09:54.872907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:00.738 [2024-12-10 03:09:54.872913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:00.738 [2024-12-10 03:09:54.872919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:00.738 [2024-12-10 03:09:54.872926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.738 [2024-12-10 03:09:54.872934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:00.738 [2024-12-10 03:09:54.872943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:22:00.738 [2024-12-10 03:09:54.872950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:54.886547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.738 [2024-12-10 03:09:54.886588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:00.738 [2024-12-10 03:09:54.886601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.574 ms 00:22:00.738 [2024-12-10 03:09:54.886610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:54.887000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.738 [2024-12-10 03:09:54.887017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:00.738 [2024-12-10 03:09:54.887027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:22:00.738 
[2024-12-10 03:09:54.887042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:54.923275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.738 [2024-12-10 03:09:54.923328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.738 [2024-12-10 03:09:54.923341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.738 [2024-12-10 03:09:54.923350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:54.923431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.738 [2024-12-10 03:09:54.923441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.738 [2024-12-10 03:09:54.923451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.738 [2024-12-10 03:09:54.923466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:54.923541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.738 [2024-12-10 03:09:54.923554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.738 [2024-12-10 03:09:54.923563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.738 [2024-12-10 03:09:54.923572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:54.923590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.738 [2024-12-10 03:09:54.923600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.738 [2024-12-10 03:09:54.923609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.738 [2024-12-10 03:09:54.923618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:55.006545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.738 [2024-12-10 03:09:55.006603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.738 [2024-12-10 03:09:55.006616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.738 [2024-12-10 03:09:55.006626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:55.074660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.738 [2024-12-10 03:09:55.074716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.738 [2024-12-10 03:09:55.074729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.738 [2024-12-10 03:09:55.074743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:55.074823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.738 [2024-12-10 03:09:55.074834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:00.738 [2024-12-10 03:09:55.074843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.738 [2024-12-10 03:09:55.074852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:55.074892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.738 [2024-12-10 03:09:55.074902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:00.738 [2024-12-10 03:09:55.074917] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.738 [2024-12-10 03:09:55.074925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:55.075025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.738 [2024-12-10 03:09:55.075036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:00.738 [2024-12-10 03:09:55.075044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.738 [2024-12-10 03:09:55.075052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:55.075085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.738 [2024-12-10 03:09:55.075095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:00.738 [2024-12-10 03:09:55.075104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.738 [2024-12-10 03:09:55.075111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:55.075153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.738 [2024-12-10 03:09:55.075166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:00.738 [2024-12-10 03:09:55.075175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.738 [2024-12-10 03:09:55.075183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.738 [2024-12-10 03:09:55.075230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.738 [2024-12-10 03:09:55.075240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:00.738 [2024-12-10 03:09:55.075248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.738 [2024-12-10 03:09:55.075257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.739 [2024-12-10 03:09:55.075416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.244 ms, result 0 00:22:01.683 00:22:01.683 00:22:01.683 03:09:55 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:01.683 [2024-12-10 03:09:56.051098] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
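
This is the second half of restore.sh's integrity round trip, and the shell trace shows the whole flow: @69 filled the test file from /dev/urandom (dd bs=4K count=256K, i.e. 262144 records of 4096 bytes = 1,073,741,824 bytes, which dd reports here as ~282 MB/s), @70 took its md5sum, @73 wrote the file into the ftl0 bdev (the "Copying: x/1024 [MB]" progress above, ending in a clean FTL shutdown), and @74 now starts a fresh spdk_dd (pid 78204) to read the same 262144 blocks back, exercising the restore path. A condensed sketch of that round trip, using only the paths and flags visible in the trace; the closing checksum compare is an assumption about what the script does after this excerpt:

    #!/usr/bin/env bash
    # Round-trip sketch of the ftl_restore check: write random data through
    # the FTL bdev, read it back in a fresh spdk_dd run, compare checksums.
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    ftl_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    dd if=/dev/urandom of="$testfile" bs=4K count=256K          # 1 GiB of random data
    md5_before=$(md5sum "$testfile" | cut -d' ' -f1)            # checksum before the trip

    "$spdk_dd" --if="$testfile" --ob=ftl0 --json="$ftl_json"    # write into ftl0 (@73)
    "$spdk_dd" --ib=ftl0 --of="$testfile" --json="$ftl_json" --count=262144  # read back (@74)

    md5_after=$(md5sum "$testfile" | cut -d' ' -f1)
    [[ "$md5_before" == "$md5_after" ]] && echo OK || echo MISMATCH

Each spdk_dd invocation brings the FTL device up and tears it down again, which is why a full startup trace follows below and why the data must survive the shutdown/restore cycle in between.
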
00:22:01.683 [2024-12-10 03:09:56.051242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78204 ] 00:22:01.945 [2024-12-10 03:09:56.214982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.206 [2024-12-10 03:09:56.331967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.469 [2024-12-10 03:09:56.627050] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.469 [2024-12-10 03:09:56.627138] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.469 [2024-12-10 03:09:56.788076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.469 [2024-12-10 03:09:56.788142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:02.469 [2024-12-10 03:09:56.788158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:02.469 [2024-12-10 03:09:56.788166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.469 [2024-12-10 03:09:56.788221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.469 [2024-12-10 03:09:56.788235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.469 [2024-12-10 03:09:56.788244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:02.469 [2024-12-10 03:09:56.788252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.469 [2024-12-10 03:09:56.788272] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:02.469 [2024-12-10 03:09:56.789016] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:02.469 [2024-12-10 03:09:56.789042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.469 [2024-12-10 03:09:56.789051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.469 [2024-12-10 03:09:56.789060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:22:02.469 [2024-12-10 03:09:56.789069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.469 [2024-12-10 03:09:56.790785] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:02.469 [2024-12-10 03:09:56.804986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.469 [2024-12-10 03:09:56.805032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:02.469 [2024-12-10 03:09:56.805046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.204 ms 00:22:02.469 [2024-12-10 03:09:56.805053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.469 [2024-12-10 03:09:56.805135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.469 [2024-12-10 03:09:56.805146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:02.469 [2024-12-10 03:09:56.805155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:02.469 [2024-12-10 03:09:56.805163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.469 [2024-12-10 03:09:56.813124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:02.469 [2024-12-10 03:09:56.813165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.469 [2024-12-10 03:09:56.813175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.884 ms 00:22:02.469 [2024-12-10 03:09:56.813189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.469 [2024-12-10 03:09:56.813265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.469 [2024-12-10 03:09:56.813275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.469 [2024-12-10 03:09:56.813283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:02.469 [2024-12-10 03:09:56.813291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.469 [2024-12-10 03:09:56.813333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.469 [2024-12-10 03:09:56.813344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.469 [2024-12-10 03:09:56.813352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:02.469 [2024-12-10 03:09:56.813360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.469 [2024-12-10 03:09:56.813410] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.469 [2024-12-10 03:09:56.817337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.469 [2024-12-10 03:09:56.817392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.469 [2024-12-10 03:09:56.817406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.933 ms 00:22:02.469 [2024-12-10 03:09:56.817414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.469 [2024-12-10 03:09:56.817453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.469 [2024-12-10 03:09:56.817462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.469 [2024-12-10 03:09:56.817471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:02.469 [2024-12-10 03:09:56.817478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.469 [2024-12-10 03:09:56.817529] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:02.469 [2024-12-10 03:09:56.817555] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:02.469 [2024-12-10 03:09:56.817593] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:02.469 [2024-12-10 03:09:56.817611] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:02.469 [2024-12-10 03:09:56.817726] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:02.469 [2024-12-10 03:09:56.817737] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.469 [2024-12-10 03:09:56.817749] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:02.469 [2024-12-10 03:09:56.817760] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.469 [2024-12-10 03:09:56.817770] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.469 [2024-12-10 03:09:56.817778] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:02.469 [2024-12-10 03:09:56.817787] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:02.469 [2024-12-10 03:09:56.817797] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:02.469 [2024-12-10 03:09:56.817805] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:02.469 [2024-12-10 03:09:56.817813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.469 [2024-12-10 03:09:56.817821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.469 [2024-12-10 03:09:56.817829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:22:02.469 [2024-12-10 03:09:56.817837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.469 [2024-12-10 03:09:56.817920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.469 [2024-12-10 03:09:56.817930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.469 [2024-12-10 03:09:56.817937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:02.469 [2024-12-10 03:09:56.817944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.469 [2024-12-10 03:09:56.818052] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.469 [2024-12-10 03:09:56.818063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.469 [2024-12-10 03:09:56.818072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.469 [2024-12-10 03:09:56.818080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.469 [2024-12-10 03:09:56.818089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:02.469 [2024-12-10 03:09:56.818097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.469 [2024-12-10 03:09:56.818104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:02.469 [2024-12-10 03:09:56.818112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.469 [2024-12-10 03:09:56.818120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:02.469 [2024-12-10 03:09:56.818127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.470 [2024-12-10 03:09:56.818135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.470 [2024-12-10 03:09:56.818142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:02.470 [2024-12-10 03:09:56.818149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.470 [2024-12-10 03:09:56.818166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:02.470 [2024-12-10 03:09:56.818174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:02.470 [2024-12-10 03:09:56.818181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.470 [2024-12-10 03:09:56.818188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.470 [2024-12-10 03:09:56.818195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:02.470 [2024-12-10 03:09:56.818202] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.470 [2024-12-10 03:09:56.818210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.470 [2024-12-10 03:09:56.818216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:02.470 [2024-12-10 03:09:56.818223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.470 [2024-12-10 03:09:56.818230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.470 [2024-12-10 03:09:56.818236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:02.470 [2024-12-10 03:09:56.818242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.470 [2024-12-10 03:09:56.818249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.470 [2024-12-10 03:09:56.818255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:02.470 [2024-12-10 03:09:56.818261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.470 [2024-12-10 03:09:56.818267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.470 [2024-12-10 03:09:56.818274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:02.470 [2024-12-10 03:09:56.818280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.470 [2024-12-10 03:09:56.818287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.470 [2024-12-10 03:09:56.818294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:02.470 [2024-12-10 03:09:56.818301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.470 [2024-12-10 03:09:56.818307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.470 [2024-12-10 03:09:56.818314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:02.470 [2024-12-10 03:09:56.818320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.470 [2024-12-10 03:09:56.818327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:02.470 [2024-12-10 03:09:56.818335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:02.470 [2024-12-10 03:09:56.818342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.470 [2024-12-10 03:09:56.818349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:02.470 [2024-12-10 03:09:56.818356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:02.470 [2024-12-10 03:09:56.818363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.470 [2024-12-10 03:09:56.818370] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.470 [2024-12-10 03:09:56.818394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.470 [2024-12-10 03:09:56.818404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.470 [2024-12-10 03:09:56.818413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.470 [2024-12-10 03:09:56.818421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.470 [2024-12-10 03:09:56.818429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.470 [2024-12-10 03:09:56.818436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.470 
[2024-12-10 03:09:56.818443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.470 [2024-12-10 03:09:56.818450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.470 [2024-12-10 03:09:56.818457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:02.470 [2024-12-10 03:09:56.818466] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.470 [2024-12-10 03:09:56.818475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.470 [2024-12-10 03:09:56.818486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:02.470 [2024-12-10 03:09:56.818494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:02.470 [2024-12-10 03:09:56.818502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:02.470 [2024-12-10 03:09:56.818509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:02.470 [2024-12-10 03:09:56.818517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:02.470 [2024-12-10 03:09:56.818524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:02.470 [2024-12-10 03:09:56.818531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:02.470 [2024-12-10 03:09:56.818539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:02.470 [2024-12-10 03:09:56.818546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:02.470 [2024-12-10 03:09:56.818553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:02.470 [2024-12-10 03:09:56.818560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:02.470 [2024-12-10 03:09:56.818568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:02.470 [2024-12-10 03:09:56.818575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:02.470 [2024-12-10 03:09:56.818582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:02.470 [2024-12-10 03:09:56.818589] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.470 [2024-12-10 03:09:56.818597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.470 [2024-12-10 03:09:56.818606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.470 [2024-12-10 03:09:56.818614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.470 [2024-12-10 03:09:56.818621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.470 [2024-12-10 03:09:56.818628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:02.470 [2024-12-10 03:09:56.818636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.470 [2024-12-10 03:09:56.818643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.470 [2024-12-10 03:09:56.818651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:22:02.470 [2024-12-10 03:09:56.818659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.732 [2024-12-10 03:09:56.850438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.732 [2024-12-10 03:09:56.850668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.732 [2024-12-10 03:09:56.850689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.734 ms 00:22:02.732 [2024-12-10 03:09:56.850705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.732 [2024-12-10 03:09:56.850799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.732 [2024-12-10 03:09:56.850808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:02.732 [2024-12-10 03:09:56.850818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:02.732 [2024-12-10 03:09:56.850825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.732 [2024-12-10 03:09:56.899141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.732 [2024-12-10 03:09:56.899194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.732 [2024-12-10 03:09:56.899208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.255 ms 00:22:02.732 [2024-12-10 03:09:56.899217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.732 [2024-12-10 03:09:56.899266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.732 [2024-12-10 03:09:56.899277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.732 [2024-12-10 03:09:56.899291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:02.732 [2024-12-10 03:09:56.899299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.732 [2024-12-10 03:09:56.899945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.732 [2024-12-10 03:09:56.899976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.732 [2024-12-10 03:09:56.899987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:22:02.732 [2024-12-10 03:09:56.899995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.732 [2024-12-10 03:09:56.900153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.732 [2024-12-10 03:09:56.900185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.732 [2024-12-10 03:09:56.900201] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:22:02.732 [2024-12-10 03:09:56.900209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.732 [2024-12-10 03:09:56.915661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:56.915707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.733 [2024-12-10 03:09:56.915718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.430 ms 00:22:02.733 [2024-12-10 03:09:56.915727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.733 [2024-12-10 03:09:56.929869] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:02.733 [2024-12-10 03:09:56.929918] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:02.733 [2024-12-10 03:09:56.929932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:56.929941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:02.733 [2024-12-10 03:09:56.929951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.097 ms 00:22:02.733 [2024-12-10 03:09:56.929957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.733 [2024-12-10 03:09:56.955817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:56.955866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:02.733 [2024-12-10 03:09:56.955879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.806 ms 00:22:02.733 [2024-12-10 03:09:56.955887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.733 [2024-12-10 03:09:56.968781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:56.968838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:02.733 [2024-12-10 03:09:56.968850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.820 ms 00:22:02.733 [2024-12-10 03:09:56.968857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.733 [2024-12-10 03:09:56.981368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:56.981420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:02.733 [2024-12-10 03:09:56.981432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.465 ms 00:22:02.733 [2024-12-10 03:09:56.981439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.733 [2024-12-10 03:09:56.982070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:56.982093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:02.733 [2024-12-10 03:09:56.982106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:22:02.733 [2024-12-10 03:09:56.982113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.733 [2024-12-10 03:09:57.045960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:57.046024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:02.733 [2024-12-10 03:09:57.046047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.828 ms 00:22:02.733 [2024-12-10 03:09:57.046058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.733 [2024-12-10 03:09:57.057186] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:02.733 [2024-12-10 03:09:57.060195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:57.060239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:02.733 [2024-12-10 03:09:57.060252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.080 ms 00:22:02.733 [2024-12-10 03:09:57.060261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.733 [2024-12-10 03:09:57.060345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:57.060358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:02.733 [2024-12-10 03:09:57.060371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:02.733 [2024-12-10 03:09:57.060405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.733 [2024-12-10 03:09:57.060479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:57.060490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:02.733 [2024-12-10 03:09:57.060499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:02.733 [2024-12-10 03:09:57.060508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.733 [2024-12-10 03:09:57.060554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:57.060564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:02.733 [2024-12-10 03:09:57.060573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:02.733 [2024-12-10 03:09:57.060582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.733 [2024-12-10 03:09:57.060621] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:02.733 [2024-12-10 03:09:57.060632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:57.060640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:02.733 [2024-12-10 03:09:57.060650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:02.733 [2024-12-10 03:09:57.060658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.733 [2024-12-10 03:09:57.086584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:57.086632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:02.733 [2024-12-10 03:09:57.086651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.907 ms 00:22:02.733 [2024-12-10 03:09:57.086660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.733 [2024-12-10 03:09:57.086742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.733 [2024-12-10 03:09:57.086753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:02.733 [2024-12-10 03:09:57.086762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:02.733 [2024-12-10 03:09:57.086771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
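The startup sequence above dumps the FTL layout during the 'Initialize layout' step, and the printed numbers are internally consistent; for example, the l2p region size follows directly from the reported entry count and address size (arithmetic only, using values from the dump above):

# 20971520 L2P entries x 4 bytes per address = 80 MiB,
# matching "Region l2p ... blocks: 80.00 MiB" in the layout dump.
echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80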
00:22:02.733 [2024-12-10 03:09:57.088020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.443 ms, result 0 00:22:04.120  [2024-12-10T03:09:59.455Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-10T03:10:00.401Z] Copying: 23/1024 [MB] (10 MBps) [2024-12-10T03:10:01.361Z] Copying: 42/1024 [MB] (19 MBps) [2024-12-10T03:10:02.298Z] Copying: 56/1024 [MB] (13 MBps) [2024-12-10T03:10:03.683Z] Copying: 73/1024 [MB] (17 MBps) [2024-12-10T03:10:04.627Z] Copying: 84/1024 [MB] (11 MBps) [2024-12-10T03:10:05.572Z] Copying: 101/1024 [MB] (16 MBps) [2024-12-10T03:10:06.518Z] Copying: 120/1024 [MB] (18 MBps) [2024-12-10T03:10:07.463Z] Copying: 134/1024 [MB] (13 MBps) [2024-12-10T03:10:08.408Z] Copying: 147/1024 [MB] (12 MBps) [2024-12-10T03:10:09.353Z] Copying: 168/1024 [MB] (20 MBps) [2024-12-10T03:10:10.298Z] Copying: 181/1024 [MB] (13 MBps) [2024-12-10T03:10:11.684Z] Copying: 194/1024 [MB] (13 MBps) [2024-12-10T03:10:12.630Z] Copying: 212/1024 [MB] (17 MBps) [2024-12-10T03:10:13.570Z] Copying: 231/1024 [MB] (19 MBps) [2024-12-10T03:10:14.531Z] Copying: 253/1024 [MB] (21 MBps) [2024-12-10T03:10:15.476Z] Copying: 272/1024 [MB] (18 MBps) [2024-12-10T03:10:16.428Z] Copying: 286/1024 [MB] (14 MBps) [2024-12-10T03:10:17.369Z] Copying: 300/1024 [MB] (14 MBps) [2024-12-10T03:10:18.311Z] Copying: 311/1024 [MB] (11 MBps) [2024-12-10T03:10:19.699Z] Copying: 323/1024 [MB] (11 MBps) [2024-12-10T03:10:20.645Z] Copying: 358/1024 [MB] (35 MBps) [2024-12-10T03:10:21.590Z] Copying: 381/1024 [MB] (23 MBps) [2024-12-10T03:10:22.532Z] Copying: 392/1024 [MB] (10 MBps) [2024-12-10T03:10:23.475Z] Copying: 403/1024 [MB] (10 MBps) [2024-12-10T03:10:24.416Z] Copying: 417/1024 [MB] (14 MBps) [2024-12-10T03:10:25.359Z] Copying: 430/1024 [MB] (13 MBps) [2024-12-10T03:10:26.302Z] Copying: 441/1024 [MB] (10 MBps) [2024-12-10T03:10:27.312Z] Copying: 460/1024 [MB] (19 MBps) [2024-12-10T03:10:28.703Z] Copying: 480/1024 [MB] (19 MBps) [2024-12-10T03:10:29.277Z] Copying: 497/1024 [MB] (17 MBps) [2024-12-10T03:10:30.666Z] Copying: 518/1024 [MB] (20 MBps) [2024-12-10T03:10:31.609Z] Copying: 528/1024 [MB] (10 MBps) [2024-12-10T03:10:32.554Z] Copying: 539/1024 [MB] (11 MBps) [2024-12-10T03:10:33.498Z] Copying: 550/1024 [MB] (10 MBps) [2024-12-10T03:10:34.442Z] Copying: 568/1024 [MB] (18 MBps) [2024-12-10T03:10:35.383Z] Copying: 585/1024 [MB] (16 MBps) [2024-12-10T03:10:36.326Z] Copying: 598/1024 [MB] (13 MBps) [2024-12-10T03:10:37.710Z] Copying: 612/1024 [MB] (14 MBps) [2024-12-10T03:10:38.282Z] Copying: 631/1024 [MB] (18 MBps) [2024-12-10T03:10:39.732Z] Copying: 649/1024 [MB] (17 MBps) [2024-12-10T03:10:40.315Z] Copying: 673/1024 [MB] (23 MBps) [2024-12-10T03:10:41.704Z] Copying: 689/1024 [MB] (15 MBps) [2024-12-10T03:10:42.646Z] Copying: 702/1024 [MB] (13 MBps) [2024-12-10T03:10:43.589Z] Copying: 719/1024 [MB] (17 MBps) [2024-12-10T03:10:44.529Z] Copying: 737/1024 [MB] (17 MBps) [2024-12-10T03:10:45.469Z] Copying: 752/1024 [MB] (14 MBps) [2024-12-10T03:10:46.411Z] Copying: 766/1024 [MB] (14 MBps) [2024-12-10T03:10:47.353Z] Copying: 782/1024 [MB] (16 MBps) [2024-12-10T03:10:48.295Z] Copying: 799/1024 [MB] (16 MBps) [2024-12-10T03:10:49.681Z] Copying: 812/1024 [MB] (12 MBps) [2024-12-10T03:10:50.626Z] Copying: 823/1024 [MB] (11 MBps) [2024-12-10T03:10:51.570Z] Copying: 836/1024 [MB] (13 MBps) [2024-12-10T03:10:52.551Z] Copying: 847/1024 [MB] (11 MBps) [2024-12-10T03:10:53.509Z] Copying: 865/1024 [MB] (17 MBps) [2024-12-10T03:10:54.454Z] Copying: 876/1024 [MB] (10 MBps) 
[2024-12-10T03:10:55.397Z] Copying: 887/1024 [MB] (10 MBps) [2024-12-10T03:10:56.340Z] Copying: 898/1024 [MB] (11 MBps) [2024-12-10T03:10:57.285Z] Copying: 921/1024 [MB] (23 MBps) [2024-12-10T03:10:58.670Z] Copying: 935/1024 [MB] (13 MBps) [2024-12-10T03:10:59.613Z] Copying: 949/1024 [MB] (14 MBps) [2024-12-10T03:11:00.557Z] Copying: 962/1024 [MB] (12 MBps) [2024-12-10T03:11:01.502Z] Copying: 975/1024 [MB] (13 MBps) [2024-12-10T03:11:02.446Z] Copying: 988/1024 [MB] (13 MBps) [2024-12-10T03:11:03.389Z] Copying: 1001/1024 [MB] (12 MBps) [2024-12-10T03:11:03.651Z] Copying: 1020/1024 [MB] (18 MBps) [2024-12-10T03:11:03.913Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-10 03:11:03.839816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.525 [2024-12-10 03:11:03.840253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:09.525 [2024-12-10 03:11:03.840412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:09.525 [2024-12-10 03:11:03.840460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.525 [2024-12-10 03:11:03.840538] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:09.525 [2024-12-10 03:11:03.845478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.525 [2024-12-10 03:11:03.845652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:09.525 [2024-12-10 03:11:03.845718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.722 ms 00:23:09.525 [2024-12-10 03:11:03.845742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.525 [2024-12-10 03:11:03.846005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.525 [2024-12-10 03:11:03.846035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:09.525 [2024-12-10 03:11:03.846057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:23:09.525 [2024-12-10 03:11:03.846123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.525 [2024-12-10 03:11:03.849626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.525 [2024-12-10 03:11:03.849733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:09.525 [2024-12-10 03:11:03.849787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.471 ms 00:23:09.525 [2024-12-10 03:11:03.849818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.525 [2024-12-10 03:11:03.857767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.525 [2024-12-10 03:11:03.857918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:09.525 [2024-12-10 03:11:03.857981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.914 ms 00:23:09.525 [2024-12-10 03:11:03.858005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.525 [2024-12-10 03:11:03.885757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.525 [2024-12-10 03:11:03.885948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:09.525 [2024-12-10 03:11:03.886125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.662 ms 00:23:09.525 [2024-12-10 03:11:03.886164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.525 [2024-12-10 03:11:03.903405] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.525 [2024-12-10 03:11:03.903572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:09.525 [2024-12-10 03:11:03.903633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.180 ms 00:23:09.525 [2024-12-10 03:11:03.903656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.525 [2024-12-10 03:11:03.904491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.525 [2024-12-10 03:11:03.904692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:09.525 [2024-12-10 03:11:03.904926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:23:09.525 [2024-12-10 03:11:03.905002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.787 [2024-12-10 03:11:03.936962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.787 [2024-12-10 03:11:03.937130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:09.787 [2024-12-10 03:11:03.937190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.876 ms 00:23:09.787 [2024-12-10 03:11:03.937211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.787 [2024-12-10 03:11:03.962507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.788 [2024-12-10 03:11:03.962674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:09.788 [2024-12-10 03:11:03.962734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.174 ms 00:23:09.788 [2024-12-10 03:11:03.962757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.788 [2024-12-10 03:11:03.987084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.788 [2024-12-10 03:11:03.987247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:09.788 [2024-12-10 03:11:03.987308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.203 ms 00:23:09.788 [2024-12-10 03:11:03.987330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.788 [2024-12-10 03:11:04.012163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.788 [2024-12-10 03:11:04.012348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:09.788 [2024-12-10 03:11:04.012458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.350 ms 00:23:09.788 [2024-12-10 03:11:04.012485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.788 [2024-12-10 03:11:04.012533] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:09.788 [2024-12-10 03:11:04.012573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.012611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.012641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.012670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.012750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.012781] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.012811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.012841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.012869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.012898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.012972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 
03:11:04.013603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:23:09.788 [2024-12-10 03:11:04.013799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:09.788 [2024-12-10 03:11:04.013960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.013968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.013975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.013982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.013990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.013998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:09.789 [2024-12-10 03:11:04.014149] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:09.789 [2024-12-10 03:11:04.014157] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7042e1d7-5da6-4d4d-8fc3-9955076e703c 00:23:09.789 [2024-12-10 03:11:04.014166] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:09.789 [2024-12-10 03:11:04.014174] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:09.789 [2024-12-10 03:11:04.014180] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:09.789 [2024-12-10 03:11:04.014189] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:09.789 [2024-12-10 03:11:04.014209] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:09.789 [2024-12-10 03:11:04.014217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:09.789 [2024-12-10 03:11:04.014225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:09.789 [2024-12-10 03:11:04.014232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:09.789 [2024-12-10 03:11:04.014239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:09.789 [2024-12-10 03:11:04.014248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.789 [2024-12-10 03:11:04.014256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:09.789 [2024-12-10 03:11:04.014267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:23:09.789 [2024-12-10 03:11:04.014279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.789 [2024-12-10 03:11:04.027858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.789 [2024-12-10 03:11:04.028021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:09.789 [2024-12-10 03:11:04.028073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.537 ms 00:23:09.789 [2024-12-10 03:11:04.028097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.789 [2024-12-10 03:11:04.028513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.789 [2024-12-10 03:11:04.028547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:09.789 [2024-12-10 03:11:04.028660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:23:09.789 [2024-12-10 03:11:04.028683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.789 [2024-12-10 03:11:04.064525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:09.789 [2024-12-10 03:11:04.064682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:09.789 [2024-12-10 03:11:04.064742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:09.789 [2024-12-10 03:11:04.064767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.789 [2024-12-10 03:11:04.064848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:09.789 [2024-12-10 03:11:04.064874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:09.789 [2024-12-10 03:11:04.064901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:09.789 [2024-12-10 03:11:04.064922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.789 [2024-12-10 03:11:04.065022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:09.789 [2024-12-10 03:11:04.065049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:09.789 [2024-12-10 03:11:04.065072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:09.789 [2024-12-10 03:11:04.065147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.789 [2024-12-10 03:11:04.065183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:09.789 [2024-12-10 03:11:04.065206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:23:09.789 [2024-12-10 03:11:04.065269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:09.789 [2024-12-10 03:11:04.065287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.789 [2024-12-10 03:11:04.149999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:09.789 [2024-12-10 03:11:04.150174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:09.789 [2024-12-10 03:11:04.150235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:09.789 [2024-12-10 03:11:04.150260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.050 [2024-12-10 03:11:04.219910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.050 [2024-12-10 03:11:04.220097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:10.050 [2024-12-10 03:11:04.220164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.050 [2024-12-10 03:11:04.220189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.050 [2024-12-10 03:11:04.220270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.050 [2024-12-10 03:11:04.220295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:10.050 [2024-12-10 03:11:04.220317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.050 [2024-12-10 03:11:04.220337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.050 [2024-12-10 03:11:04.220436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.050 [2024-12-10 03:11:04.220462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:10.050 [2024-12-10 03:11:04.220484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.050 [2024-12-10 03:11:04.220562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.050 [2024-12-10 03:11:04.220701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.050 [2024-12-10 03:11:04.220798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:10.050 [2024-12-10 03:11:04.220859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.050 [2024-12-10 03:11:04.220879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.050 [2024-12-10 03:11:04.220929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.050 [2024-12-10 03:11:04.220954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:10.050 [2024-12-10 03:11:04.220974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.050 [2024-12-10 03:11:04.220993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.050 [2024-12-10 03:11:04.221051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.050 [2024-12-10 03:11:04.221076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:10.050 [2024-12-10 03:11:04.221154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.050 [2024-12-10 03:11:04.221177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.050 [2024-12-10 03:11:04.221244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.050 [2024-12-10 03:11:04.221270] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:10.050 [2024-12-10 03:11:04.221291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.050 [2024-12-10 03:11:04.221312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.050 [2024-12-10 03:11:04.221486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 381.647 ms, result 0 00:23:10.623 00:23:10.623 00:23:10.623 03:11:04 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:13.209 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:13.209 03:11:07 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:13.209 [2024-12-10 03:11:07.112854] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:23:13.209 [2024-12-10 03:11:07.113109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78927 ] 00:23:13.209 [2024-12-10 03:11:07.269613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.209 [2024-12-10 03:11:07.369216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.470 [2024-12-10 03:11:07.661194] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:13.470 [2024-12-10 03:11:07.661285] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:13.470 [2024-12-10 03:11:07.821738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.470 [2024-12-10 03:11:07.821801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:13.470 [2024-12-10 03:11:07.821816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:13.470 [2024-12-10 03:11:07.821825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.470 [2024-12-10 03:11:07.821881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.470 [2024-12-10 03:11:07.821894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:13.470 [2024-12-10 03:11:07.821903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:13.470 [2024-12-10 03:11:07.821912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.470 [2024-12-10 03:11:07.821932] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:13.470 [2024-12-10 03:11:07.822678] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:13.470 [2024-12-10 03:11:07.822700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.470 [2024-12-10 03:11:07.822708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:13.470 [2024-12-10 03:11:07.822718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:23:13.470 [2024-12-10 03:11:07.822725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.470 [2024-12-10 03:11:07.824401] mngt/ftl_mngt_md.c: 
455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:13.470 [2024-12-10 03:11:07.838461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.470 [2024-12-10 03:11:07.838508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:13.470 [2024-12-10 03:11:07.838522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.062 ms 00:23:13.470 [2024-12-10 03:11:07.838531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.470 [2024-12-10 03:11:07.838610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.470 [2024-12-10 03:11:07.838620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:13.470 [2024-12-10 03:11:07.838629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:13.470 [2024-12-10 03:11:07.838637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.470 [2024-12-10 03:11:07.846445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.470 [2024-12-10 03:11:07.846486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:13.470 [2024-12-10 03:11:07.846497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.733 ms 00:23:13.470 [2024-12-10 03:11:07.846510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.471 [2024-12-10 03:11:07.846588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.471 [2024-12-10 03:11:07.846597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:13.471 [2024-12-10 03:11:07.846606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:13.471 [2024-12-10 03:11:07.846614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.471 [2024-12-10 03:11:07.846655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.471 [2024-12-10 03:11:07.846666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:13.471 [2024-12-10 03:11:07.846675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:13.471 [2024-12-10 03:11:07.846682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.471 [2024-12-10 03:11:07.846708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:13.733 [2024-12-10 03:11:07.850664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.733 [2024-12-10 03:11:07.850700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:13.733 [2024-12-10 03:11:07.850714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.961 ms 00:23:13.733 [2024-12-10 03:11:07.850722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.733 [2024-12-10 03:11:07.850762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.733 [2024-12-10 03:11:07.850770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:13.733 [2024-12-10 03:11:07.850780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:13.733 [2024-12-10 03:11:07.850788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.733 [2024-12-10 03:11:07.850838] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:13.733 [2024-12-10 03:11:07.850864] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:13.733 [2024-12-10 03:11:07.850902] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:13.733 [2024-12-10 03:11:07.850920] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:13.733 [2024-12-10 03:11:07.851026] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:13.733 [2024-12-10 03:11:07.851037] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:13.733 [2024-12-10 03:11:07.851049] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:13.733 [2024-12-10 03:11:07.851059] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:13.733 [2024-12-10 03:11:07.851069] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:13.733 [2024-12-10 03:11:07.851077] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:13.733 [2024-12-10 03:11:07.851086] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:13.733 [2024-12-10 03:11:07.851096] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:13.733 [2024-12-10 03:11:07.851105] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:13.733 [2024-12-10 03:11:07.851113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.733 [2024-12-10 03:11:07.851121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:13.733 [2024-12-10 03:11:07.851129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:23:13.733 [2024-12-10 03:11:07.851136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.733 [2024-12-10 03:11:07.851219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.733 [2024-12-10 03:11:07.851227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:13.733 [2024-12-10 03:11:07.851236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:13.733 [2024-12-10 03:11:07.851244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.733 [2024-12-10 03:11:07.851349] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:13.733 [2024-12-10 03:11:07.851360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:13.733 [2024-12-10 03:11:07.851369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:13.733 [2024-12-10 03:11:07.851394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.733 [2024-12-10 03:11:07.851402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:13.733 [2024-12-10 03:11:07.851410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:13.733 [2024-12-10 03:11:07.851416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:13.733 [2024-12-10 03:11:07.851425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:13.733 [2024-12-10 03:11:07.851433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:23:13.733 [2024-12-10 03:11:07.851440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:13.733 [2024-12-10 03:11:07.851447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:13.733 [2024-12-10 03:11:07.851454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:13.733 [2024-12-10 03:11:07.851461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:13.733 [2024-12-10 03:11:07.851474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:13.733 [2024-12-10 03:11:07.851483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:13.733 [2024-12-10 03:11:07.851491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.733 [2024-12-10 03:11:07.851498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:13.733 [2024-12-10 03:11:07.851505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:13.733 [2024-12-10 03:11:07.851511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.733 [2024-12-10 03:11:07.851518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:13.733 [2024-12-10 03:11:07.851524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:13.733 [2024-12-10 03:11:07.851531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:13.733 [2024-12-10 03:11:07.851537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:13.733 [2024-12-10 03:11:07.851544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:13.733 [2024-12-10 03:11:07.851551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:13.733 [2024-12-10 03:11:07.851557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:13.733 [2024-12-10 03:11:07.851564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:13.733 [2024-12-10 03:11:07.851570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:13.733 [2024-12-10 03:11:07.851576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:13.733 [2024-12-10 03:11:07.851583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:13.733 [2024-12-10 03:11:07.851590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:13.733 [2024-12-10 03:11:07.851597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:13.733 [2024-12-10 03:11:07.851604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:13.733 [2024-12-10 03:11:07.851610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:13.733 [2024-12-10 03:11:07.851617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:13.733 [2024-12-10 03:11:07.851624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:13.733 [2024-12-10 03:11:07.851630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:13.733 [2024-12-10 03:11:07.851637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:13.733 [2024-12-10 03:11:07.851643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:13.734 [2024-12-10 03:11:07.851649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.734 [2024-12-10 03:11:07.851657] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:13.734 [2024-12-10 03:11:07.851663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:13.734 [2024-12-10 03:11:07.851671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.734 [2024-12-10 03:11:07.851677] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:13.734 [2024-12-10 03:11:07.851685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:13.734 [2024-12-10 03:11:07.851693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:13.734 [2024-12-10 03:11:07.851702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.734 [2024-12-10 03:11:07.851710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:13.734 [2024-12-10 03:11:07.851717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:13.734 [2024-12-10 03:11:07.851724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:13.734 [2024-12-10 03:11:07.851731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:13.734 [2024-12-10 03:11:07.851737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:13.734 [2024-12-10 03:11:07.851744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:13.734 [2024-12-10 03:11:07.851753] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:13.734 [2024-12-10 03:11:07.851763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:13.734 [2024-12-10 03:11:07.851774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:13.734 [2024-12-10 03:11:07.851782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:13.734 [2024-12-10 03:11:07.851789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:13.734 [2024-12-10 03:11:07.851796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:13.734 [2024-12-10 03:11:07.851803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:13.734 [2024-12-10 03:11:07.851811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:13.734 [2024-12-10 03:11:07.851819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:13.734 [2024-12-10 03:11:07.851825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:13.734 [2024-12-10 03:11:07.851832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:13.734 [2024-12-10 03:11:07.851839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:13.734 [2024-12-10 03:11:07.851846] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:13.734 [2024-12-10 03:11:07.851854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:13.734 [2024-12-10 03:11:07.851861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:13.734 [2024-12-10 03:11:07.851869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:13.734 [2024-12-10 03:11:07.851875] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:13.734 [2024-12-10 03:11:07.851884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:13.734 [2024-12-10 03:11:07.851893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:13.734 [2024-12-10 03:11:07.851900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:13.734 [2024-12-10 03:11:07.851907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:13.734 [2024-12-10 03:11:07.851926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:13.734 [2024-12-10 03:11:07.851934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:07.851942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:13.734 [2024-12-10 03:11:07.851950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:23:13.734 [2024-12-10 03:11:07.851960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:07.883502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:07.883549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:13.734 [2024-12-10 03:11:07.883561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.493 ms 00:23:13.734 [2024-12-10 03:11:07.883573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:07.883666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:07.883676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:13.734 [2024-12-10 03:11:07.883684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:13.734 [2024-12-10 03:11:07.883692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:07.931567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:07.931620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:13.734 [2024-12-10 03:11:07.931634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.816 ms 00:23:13.734 [2024-12-10 03:11:07.931643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:07.931690] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:07.931701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:13.734 [2024-12-10 03:11:07.931713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:13.734 [2024-12-10 03:11:07.931721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:07.932302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:07.932334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:13.734 [2024-12-10 03:11:07.932344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:23:13.734 [2024-12-10 03:11:07.932352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:07.932533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:07.932544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:13.734 [2024-12-10 03:11:07.932560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:23:13.734 [2024-12-10 03:11:07.932568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:07.947958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:07.948005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:13.734 [2024-12-10 03:11:07.948016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.369 ms 00:23:13.734 [2024-12-10 03:11:07.948024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:07.961937] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:13.734 [2024-12-10 03:11:07.962136] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:13.734 [2024-12-10 03:11:07.962156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:07.962165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:13.734 [2024-12-10 03:11:07.962175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.026 ms 00:23:13.734 [2024-12-10 03:11:07.962182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:07.987964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:07.988011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:13.734 [2024-12-10 03:11:07.988023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.711 ms 00:23:13.734 [2024-12-10 03:11:07.988031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:08.000459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:08.000515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:13.734 [2024-12-10 03:11:08.000527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.365 ms 00:23:13.734 [2024-12-10 03:11:08.000534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:08.012776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 
03:11:08.012819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:13.734 [2024-12-10 03:11:08.012831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.196 ms 00:23:13.734 [2024-12-10 03:11:08.012839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:08.013505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:08.013529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:13.734 [2024-12-10 03:11:08.013542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:23:13.734 [2024-12-10 03:11:08.013550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:08.079362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:08.079624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:13.734 [2024-12-10 03:11:08.079657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.792 ms 00:23:13.734 [2024-12-10 03:11:08.079667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:08.090647] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:13.734 [2024-12-10 03:11:08.093627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:08.093673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:13.734 [2024-12-10 03:11:08.093685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.913 ms 00:23:13.734 [2024-12-10 03:11:08.093694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.734 [2024-12-10 03:11:08.093774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.734 [2024-12-10 03:11:08.093785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:13.734 [2024-12-10 03:11:08.093799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:13.734 [2024-12-10 03:11:08.093807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.735 [2024-12-10 03:11:08.093878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.735 [2024-12-10 03:11:08.093888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:13.735 [2024-12-10 03:11:08.093897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:13.735 [2024-12-10 03:11:08.093906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.735 [2024-12-10 03:11:08.093926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.735 [2024-12-10 03:11:08.093935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:13.735 [2024-12-10 03:11:08.093944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:13.735 [2024-12-10 03:11:08.093952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.735 [2024-12-10 03:11:08.093990] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:13.735 [2024-12-10 03:11:08.094001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.735 [2024-12-10 03:11:08.094009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:13.735 
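(Annotation: the L2P numbers in this startup are mutually consistent: the layout dump above lists 20971520 L2P entries at an address size of 4 bytes, which is exactly the 80.00 MiB "l2p" region, while the ftl_l2p_cache notice here caps the resident, in-DRAM portion of that table at 9 of 10 MiB. A quick shell check of the region arithmetic, constants copied from the log:

    # Consistency check: l2p region size = entries x address size,
    # converted to MiB (both constants come from the layout dump above).
    l2p_entries=20971520
    l2p_addr_size=4                                   # bytes per entry
    echo "$(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"   # -> 80 MiB
)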
[2024-12-10 03:11:08.094018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:13.735 [2024-12-10 03:11:08.094026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.996 [2024-12-10 03:11:08.119822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.996 [2024-12-10 03:11:08.120012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:13.996 [2024-12-10 03:11:08.120085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.777 ms 00:23:13.996 [2024-12-10 03:11:08.120110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.996 [2024-12-10 03:11:08.120279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.996 [2024-12-10 03:11:08.120434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:13.996 [2024-12-10 03:11:08.120465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:13.996 [2024-12-10 03:11:08.120486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.996 [2024-12-10 03:11:08.121777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.538 ms, result 0 00:23:14.940  [2024-12-10T03:11:10.271Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-10T03:11:11.216Z] Copying: 33/1024 [MB] (17 MBps) [2024-12-10T03:11:12.161Z] Copying: 46/1024 [MB] (13 MBps) [2024-12-10T03:11:13.548Z] Copying: 57/1024 [MB] (10 MBps) [2024-12-10T03:11:14.491Z] Copying: 68/1024 [MB] (10 MBps) [2024-12-10T03:11:15.433Z] Copying: 80144/1048576 [kB] (10180 kBps) [2024-12-10T03:11:16.377Z] Copying: 92/1024 [MB] (14 MBps) [2024-12-10T03:11:17.322Z] Copying: 119/1024 [MB] (27 MBps) [2024-12-10T03:11:18.266Z] Copying: 132/1024 [MB] (12 MBps) [2024-12-10T03:11:19.239Z] Copying: 144/1024 [MB] (12 MBps) [2024-12-10T03:11:20.184Z] Copying: 160/1024 [MB] (16 MBps) [2024-12-10T03:11:21.571Z] Copying: 172/1024 [MB] (11 MBps) [2024-12-10T03:11:22.145Z] Copying: 189/1024 [MB] (17 MBps) [2024-12-10T03:11:23.531Z] Copying: 208/1024 [MB] (19 MBps) [2024-12-10T03:11:24.477Z] Copying: 224/1024 [MB] (15 MBps) [2024-12-10T03:11:25.416Z] Copying: 236/1024 [MB] (12 MBps) [2024-12-10T03:11:26.358Z] Copying: 253/1024 [MB] (16 MBps) [2024-12-10T03:11:27.299Z] Copying: 271/1024 [MB] (17 MBps) [2024-12-10T03:11:28.239Z] Copying: 291/1024 [MB] (20 MBps) [2024-12-10T03:11:29.182Z] Copying: 308/1024 [MB] (17 MBps) [2024-12-10T03:11:30.569Z] Copying: 327/1024 [MB] (18 MBps) [2024-12-10T03:11:31.141Z] Copying: 342/1024 [MB] (14 MBps) [2024-12-10T03:11:32.145Z] Copying: 354/1024 [MB] (12 MBps) [2024-12-10T03:11:33.530Z] Copying: 367/1024 [MB] (12 MBps) [2024-12-10T03:11:34.484Z] Copying: 378/1024 [MB] (10 MBps) [2024-12-10T03:11:35.424Z] Copying: 396/1024 [MB] (18 MBps) [2024-12-10T03:11:36.366Z] Copying: 415/1024 [MB] (18 MBps) [2024-12-10T03:11:37.309Z] Copying: 435/1024 [MB] (19 MBps) [2024-12-10T03:11:38.250Z] Copying: 447/1024 [MB] (12 MBps) [2024-12-10T03:11:39.194Z] Copying: 457/1024 [MB] (10 MBps) [2024-12-10T03:11:40.580Z] Copying: 468/1024 [MB] (10 MBps) [2024-12-10T03:11:41.152Z] Copying: 479/1024 [MB] (10 MBps) [2024-12-10T03:11:42.542Z] Copying: 489/1024 [MB] (10 MBps) [2024-12-10T03:11:43.491Z] Copying: 500/1024 [MB] (11 MBps) [2024-12-10T03:11:44.434Z] Copying: 511/1024 [MB] (10 MBps) [2024-12-10T03:11:45.379Z] Copying: 521/1024 [MB] (10 MBps) [2024-12-10T03:11:46.322Z] Copying: 544588/1048576 [kB] (10236 kBps) 
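(Annotation: the "Copying:" entries around this point appear to be carriage-return progress updates flattened by the log capture; they run from roughly 03:11:10 to 03:12:18 for the full 1024 MB, which is where the closing "average 14 MBps" figure comes from. A rough back-of-the-envelope check, assuming GNU date and with the timestamps rounded from the first and last entries:

    # Rough check of the reported average rate over the copy window
    # (start/end rounded from the progress timestamps; GNU date syntax).
    start=$(date -ud '2024-12-10 03:11:08' +%s)
    end=$(date -ud '2024-12-10 03:12:18' +%s)
    echo "$(( 1024 / (end - start) )) MBps"           # 1024 MB / 70 s -> 14
)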
[2024-12-10T03:11:47.265Z] Copying: 543/1024 [MB] (11 MBps) [2024-12-10T03:11:48.209Z] Copying: 560/1024 [MB] (16 MBps) [2024-12-10T03:11:49.155Z] Copying: 575/1024 [MB] (15 MBps) [2024-12-10T03:11:50.543Z] Copying: 598/1024 [MB] (23 MBps) [2024-12-10T03:11:51.488Z] Copying: 637/1024 [MB] (38 MBps) [2024-12-10T03:11:52.433Z] Copying: 649/1024 [MB] (12 MBps) [2024-12-10T03:11:53.378Z] Copying: 669/1024 [MB] (20 MBps) [2024-12-10T03:11:54.353Z] Copying: 688/1024 [MB] (18 MBps) [2024-12-10T03:11:55.298Z] Copying: 702/1024 [MB] (13 MBps) [2024-12-10T03:11:56.247Z] Copying: 716/1024 [MB] (14 MBps) [2024-12-10T03:11:57.192Z] Copying: 729/1024 [MB] (12 MBps) [2024-12-10T03:11:58.581Z] Copying: 741/1024 [MB] (11 MBps) [2024-12-10T03:11:59.155Z] Copying: 769080/1048576 [kB] (10216 kBps) [2024-12-10T03:12:00.542Z] Copying: 764/1024 [MB] (13 MBps) [2024-12-10T03:12:01.485Z] Copying: 778/1024 [MB] (13 MBps) [2024-12-10T03:12:02.429Z] Copying: 796/1024 [MB] (18 MBps) [2024-12-10T03:12:03.372Z] Copying: 816/1024 [MB] (19 MBps) [2024-12-10T03:12:04.318Z] Copying: 828/1024 [MB] (12 MBps) [2024-12-10T03:12:05.265Z] Copying: 838/1024 [MB] (10 MBps) [2024-12-10T03:12:06.231Z] Copying: 855/1024 [MB] (16 MBps) [2024-12-10T03:12:07.176Z] Copying: 868/1024 [MB] (13 MBps) [2024-12-10T03:12:08.565Z] Copying: 878/1024 [MB] (10 MBps) [2024-12-10T03:12:09.511Z] Copying: 893/1024 [MB] (14 MBps) [2024-12-10T03:12:10.457Z] Copying: 904/1024 [MB] (11 MBps) [2024-12-10T03:12:11.400Z] Copying: 918/1024 [MB] (14 MBps) [2024-12-10T03:12:12.345Z] Copying: 934/1024 [MB] (15 MBps) [2024-12-10T03:12:13.289Z] Copying: 950/1024 [MB] (16 MBps) [2024-12-10T03:12:14.232Z] Copying: 962/1024 [MB] (11 MBps) [2024-12-10T03:12:15.177Z] Copying: 980/1024 [MB] (18 MBps) [2024-12-10T03:12:16.565Z] Copying: 992/1024 [MB] (11 MBps) [2024-12-10T03:12:17.202Z] Copying: 1003/1024 [MB] (10 MBps) [2024-12-10T03:12:18.586Z] Copying: 1020/1024 [MB] (17 MBps) [2024-12-10T03:12:18.586Z] Copying: 1048400/1048576 [kB] (3492 kBps) [2024-12-10T03:12:18.586Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-10 03:12:18.310765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.198 [2024-12-10 03:12:18.311056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:24.198 [2024-12-10 03:12:18.311098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:24.198 [2024-12-10 03:12:18.311108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.198 [2024-12-10 03:12:18.313190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:24.198 [2024-12-10 03:12:18.318694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.198 [2024-12-10 03:12:18.318738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:24.198 [2024-12-10 03:12:18.318753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.454 ms 00:24:24.198 [2024-12-10 03:12:18.318762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.198 [2024-12-10 03:12:18.330939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.198 [2024-12-10 03:12:18.330981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:24.198 [2024-12-10 03:12:18.330994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.105 ms 00:24:24.198 [2024-12-10 03:12:18.331014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:24:24.198 [2024-12-10 03:12:18.354388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.198 [2024-12-10 03:12:18.354445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:24.198 [2024-12-10 03:12:18.354460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.353 ms 00:24:24.198 [2024-12-10 03:12:18.354470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.198 [2024-12-10 03:12:18.360629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.198 [2024-12-10 03:12:18.360661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:24.198 [2024-12-10 03:12:18.360673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.125 ms 00:24:24.198 [2024-12-10 03:12:18.360693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.198 [2024-12-10 03:12:18.390118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.198 [2024-12-10 03:12:18.390366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:24.198 [2024-12-10 03:12:18.390421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.340 ms 00:24:24.198 [2024-12-10 03:12:18.390432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.198 [2024-12-10 03:12:18.413471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.198 [2024-12-10 03:12:18.413693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:24.198 [2024-12-10 03:12:18.413722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.953 ms 00:24:24.198 [2024-12-10 03:12:18.413732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.198 [2024-12-10 03:12:18.572950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.198 [2024-12-10 03:12:18.573152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:24.198 [2024-12-10 03:12:18.573175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 159.078 ms 00:24:24.198 [2024-12-10 03:12:18.573186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.462 [2024-12-10 03:12:18.600303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.462 [2024-12-10 03:12:18.600514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:24.462 [2024-12-10 03:12:18.600536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.093 ms 00:24:24.462 [2024-12-10 03:12:18.600546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.462 [2024-12-10 03:12:18.626715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.462 [2024-12-10 03:12:18.626770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:24.462 [2024-12-10 03:12:18.626786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.831 ms 00:24:24.462 [2024-12-10 03:12:18.626795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.462 [2024-12-10 03:12:18.651956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.462 [2024-12-10 03:12:18.652154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:24.462 [2024-12-10 03:12:18.652176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.111 ms 00:24:24.462 [2024-12-10 
03:12:18.652184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.462 [2024-12-10 03:12:18.677480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.462 [2024-12-10 03:12:18.677527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:24.462 [2024-12-10 03:12:18.677539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.207 ms 00:24:24.462 [2024-12-10 03:12:18.677546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.462 [2024-12-10 03:12:18.677592] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:24.462 [2024-12-10 03:12:18.677609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 101888 / 261120 wr_cnt: 1 state: open 00:24:24.462 [2024-12-10 03:12:18.677621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677775] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 
[2024-12-10 03:12:18.677971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.677994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:24.462 [2024-12-10 03:12:18.678118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:24:24.463 [2024-12-10 03:12:18.678166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:24.463 [2024-12-10 03:12:18.678433] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:24.463 [2024-12-10 03:12:18.678441] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7042e1d7-5da6-4d4d-8fc3-9955076e703c 00:24:24.463 [2024-12-10 03:12:18.678451] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 101888 00:24:24.463 [2024-12-10 03:12:18.678460] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 102848 00:24:24.463 [2024-12-10 03:12:18.678467] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 101888 00:24:24.463 [2024-12-10 03:12:18.678477] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:24:24.463 [2024-12-10 03:12:18.678498] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:24.463 [2024-12-10 03:12:18.678526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:24.463 [2024-12-10 03:12:18.678535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:24.463 [2024-12-10 03:12:18.678542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:24.463 [2024-12-10 03:12:18.678549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:24.463 [2024-12-10 03:12:18.678565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.463 [2024-12-10 03:12:18.678574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:24.463 [2024-12-10 03:12:18.678583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:24:24.463 [2024-12-10 03:12:18.678592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.463 [2024-12-10 03:12:18.692128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.463 [2024-12-10 03:12:18.692173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:24.463 [2024-12-10 03:12:18.692191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.516 ms 00:24:24.463 [2024-12-10 03:12:18.692199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.463 [2024-12-10 03:12:18.692645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.463 [2024-12-10 03:12:18.692657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:24.463 [2024-12-10 03:12:18.692666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:24:24.463 [2024-12-10 03:12:18.692674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.463 [2024-12-10 03:12:18.729738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.463 [2024-12-10 03:12:18.729929] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:24.463 [2024-12-10 03:12:18.729950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.463 [2024-12-10 03:12:18.729959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.463 [2024-12-10 03:12:18.730034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.463 [2024-12-10 03:12:18.730043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:24.463 [2024-12-10 03:12:18.730052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.463 [2024-12-10 03:12:18.730060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.463 [2024-12-10 03:12:18.730125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.463 [2024-12-10 03:12:18.730143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:24.463 [2024-12-10 03:12:18.730152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.463 [2024-12-10 03:12:18.730159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.463 [2024-12-10 03:12:18.730175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.463 [2024-12-10 03:12:18.730183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:24.463 [2024-12-10 03:12:18.730191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.463 [2024-12-10 03:12:18.730199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.463 [2024-12-10 03:12:18.815117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.463 [2024-12-10 03:12:18.815180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.463 [2024-12-10 03:12:18.815193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.463 [2024-12-10 03:12:18.815202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.726 [2024-12-10 03:12:18.884487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.726 [2024-12-10 03:12:18.884701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:24.726 [2024-12-10 03:12:18.884722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.726 [2024-12-10 03:12:18.884732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.726 [2024-12-10 03:12:18.884823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.726 [2024-12-10 03:12:18.884836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:24.726 [2024-12-10 03:12:18.884845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.726 [2024-12-10 03:12:18.884861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.726 [2024-12-10 03:12:18.884900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.726 [2024-12-10 03:12:18.884910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:24.726 [2024-12-10 03:12:18.884919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.726 [2024-12-10 03:12:18.884928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.726 [2024-12-10 03:12:18.885034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:24.726 [2024-12-10 03:12:18.885045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:24.726 [2024-12-10 03:12:18.885054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.726 [2024-12-10 03:12:18.885066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.726 [2024-12-10 03:12:18.885099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.726 [2024-12-10 03:12:18.885109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:24.726 [2024-12-10 03:12:18.885118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.726 [2024-12-10 03:12:18.885126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.726 [2024-12-10 03:12:18.885173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.726 [2024-12-10 03:12:18.885183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:24.726 [2024-12-10 03:12:18.885191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.726 [2024-12-10 03:12:18.885199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.726 [2024-12-10 03:12:18.885254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.726 [2024-12-10 03:12:18.885265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:24.726 [2024-12-10 03:12:18.885275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.727 [2024-12-10 03:12:18.885283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.727 [2024-12-10 03:12:18.885455] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 576.725 ms, result 0 00:24:26.114 00:24:26.114 00:24:26.114 03:12:20 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:26.114 [2024-12-10 03:12:20.229615] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
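The shutdown statistics above and the spdk_dd restore invocation that follows them are easy to cross-check. Write amplification (WAF) is the ratio of total media writes to user writes, and the spdk_dd --skip/--count arguments are given in blocks; assuming the FTL bdev's 4 KiB block size (an assumption here, though it is consistent with the copy totals reported further below), the numbers line up. A minimal sanity-check sketch, not part of the test output:

    # WAF = total writes / user writes, per the ftl_dev_dump_stats output above
    awk 'BEGIN { printf "WAF = %.4f\n", 102848 / 101888 }'   # -> WAF = 1.0094

    # spdk_dd --skip/--count are in blocks; 4096-byte FTL blocks assumed
    blk=4096
    echo "count = $(( 262144 * blk / 1048576 )) MiB"   # -> 1024 MiB, matching "Copying: 1024/1024 [MB]" below
    echo "skip  = $(( 131072 * blk / 1048576 )) MiB"   # -> 512 MiB starting offset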
00:24:26.114 [2024-12-10 03:12:20.229762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79670 ] 00:24:26.114 [2024-12-10 03:12:20.394868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.374 [2024-12-10 03:12:20.518699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.635 [2024-12-10 03:12:20.814948] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:26.635 [2024-12-10 03:12:20.815039] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:26.635 [2024-12-10 03:12:20.976747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.635 [2024-12-10 03:12:20.976813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:26.635 [2024-12-10 03:12:20.976829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:26.635 [2024-12-10 03:12:20.976838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.635 [2024-12-10 03:12:20.976894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.635 [2024-12-10 03:12:20.976908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:26.635 [2024-12-10 03:12:20.976916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:26.635 [2024-12-10 03:12:20.976924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.635 [2024-12-10 03:12:20.976945] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:26.635 [2024-12-10 03:12:20.977685] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:26.635 [2024-12-10 03:12:20.977706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.635 [2024-12-10 03:12:20.977715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:26.635 [2024-12-10 03:12:20.977725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:24:26.635 [2024-12-10 03:12:20.977733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.635 [2024-12-10 03:12:20.979449] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:26.635 [2024-12-10 03:12:20.993882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.635 [2024-12-10 03:12:20.993933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:26.635 [2024-12-10 03:12:20.993946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.435 ms 00:24:26.635 [2024-12-10 03:12:20.993955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.635 [2024-12-10 03:12:20.994041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.635 [2024-12-10 03:12:20.994051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:26.635 [2024-12-10 03:12:20.994060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:26.635 [2024-12-10 03:12:20.994068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.635 [2024-12-10 03:12:21.002364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:26.635 [2024-12-10 03:12:21.002427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:26.635 [2024-12-10 03:12:21.002438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.215 ms 00:24:26.635 [2024-12-10 03:12:21.002451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.635 [2024-12-10 03:12:21.002531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.635 [2024-12-10 03:12:21.002541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:26.635 [2024-12-10 03:12:21.002550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:26.635 [2024-12-10 03:12:21.002558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.636 [2024-12-10 03:12:21.002604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.636 [2024-12-10 03:12:21.002615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:26.636 [2024-12-10 03:12:21.002623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:26.636 [2024-12-10 03:12:21.002631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.636 [2024-12-10 03:12:21.002659] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:26.636 [2024-12-10 03:12:21.006967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.636 [2024-12-10 03:12:21.007153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:26.636 [2024-12-10 03:12:21.007181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.313 ms 00:24:26.636 [2024-12-10 03:12:21.007190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.636 [2024-12-10 03:12:21.007236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.636 [2024-12-10 03:12:21.007246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:26.636 [2024-12-10 03:12:21.007255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:26.636 [2024-12-10 03:12:21.007263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.636 [2024-12-10 03:12:21.007319] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:26.636 [2024-12-10 03:12:21.007346] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:26.636 [2024-12-10 03:12:21.007407] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:26.636 [2024-12-10 03:12:21.007429] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:26.636 [2024-12-10 03:12:21.007537] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:26.636 [2024-12-10 03:12:21.007549] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:26.636 [2024-12-10 03:12:21.007561] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:26.636 [2024-12-10 03:12:21.007571] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:26.636 [2024-12-10 03:12:21.007581] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:26.636 [2024-12-10 03:12:21.007589] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:26.636 [2024-12-10 03:12:21.007597] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:26.636 [2024-12-10 03:12:21.007607] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:26.636 [2024-12-10 03:12:21.007615] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:26.636 [2024-12-10 03:12:21.007623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.636 [2024-12-10 03:12:21.007632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:26.636 [2024-12-10 03:12:21.007641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:24:26.636 [2024-12-10 03:12:21.007648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.636 [2024-12-10 03:12:21.007734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.636 [2024-12-10 03:12:21.007743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:26.636 [2024-12-10 03:12:21.007751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:26.636 [2024-12-10 03:12:21.007759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.636 [2024-12-10 03:12:21.007866] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:26.636 [2024-12-10 03:12:21.007877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:26.636 [2024-12-10 03:12:21.007886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:26.636 [2024-12-10 03:12:21.007895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.636 [2024-12-10 03:12:21.007903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:26.636 [2024-12-10 03:12:21.007921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:26.636 [2024-12-10 03:12:21.007929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:26.636 [2024-12-10 03:12:21.007936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:26.636 [2024-12-10 03:12:21.007946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:26.636 [2024-12-10 03:12:21.007953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:26.636 [2024-12-10 03:12:21.007960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:26.636 [2024-12-10 03:12:21.007967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:26.636 [2024-12-10 03:12:21.007974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:26.636 [2024-12-10 03:12:21.007990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:26.636 [2024-12-10 03:12:21.007998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:26.636 [2024-12-10 03:12:21.008005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.636 [2024-12-10 03:12:21.008012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:26.636 [2024-12-10 03:12:21.008020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:26.636 [2024-12-10 03:12:21.008026] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.636 [2024-12-10 03:12:21.008033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:26.636 [2024-12-10 03:12:21.008040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:26.636 [2024-12-10 03:12:21.008047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.636 [2024-12-10 03:12:21.008054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:26.636 [2024-12-10 03:12:21.008061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:26.636 [2024-12-10 03:12:21.008068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.636 [2024-12-10 03:12:21.008075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:26.636 [2024-12-10 03:12:21.008082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:26.636 [2024-12-10 03:12:21.008088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.636 [2024-12-10 03:12:21.008095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:26.636 [2024-12-10 03:12:21.008102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:26.636 [2024-12-10 03:12:21.008109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.636 [2024-12-10 03:12:21.008116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:26.636 [2024-12-10 03:12:21.008123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:26.636 [2024-12-10 03:12:21.008130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:26.636 [2024-12-10 03:12:21.008137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:26.636 [2024-12-10 03:12:21.008144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:26.636 [2024-12-10 03:12:21.008150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:26.636 [2024-12-10 03:12:21.008156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:26.636 [2024-12-10 03:12:21.008163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:26.636 [2024-12-10 03:12:21.008170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.636 [2024-12-10 03:12:21.008177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:26.636 [2024-12-10 03:12:21.008184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:26.636 [2024-12-10 03:12:21.008191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.636 [2024-12-10 03:12:21.008198] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:26.636 [2024-12-10 03:12:21.008206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:26.636 [2024-12-10 03:12:21.008216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:26.636 [2024-12-10 03:12:21.008225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.636 [2024-12-10 03:12:21.008233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:26.636 [2024-12-10 03:12:21.008240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:26.636 [2024-12-10 03:12:21.008246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:26.636 
[2024-12-10 03:12:21.008254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:26.636 [2024-12-10 03:12:21.008260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:26.636 [2024-12-10 03:12:21.008267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:26.636 [2024-12-10 03:12:21.008275] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:26.636 [2024-12-10 03:12:21.008285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:26.636 [2024-12-10 03:12:21.008297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:26.636 [2024-12-10 03:12:21.008305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:26.636 [2024-12-10 03:12:21.008312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:26.636 [2024-12-10 03:12:21.008319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:26.636 [2024-12-10 03:12:21.008327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:26.636 [2024-12-10 03:12:21.008334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:26.636 [2024-12-10 03:12:21.008341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:26.636 [2024-12-10 03:12:21.008348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:26.636 [2024-12-10 03:12:21.008355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:26.636 [2024-12-10 03:12:21.008362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:26.636 [2024-12-10 03:12:21.008369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:26.636 [2024-12-10 03:12:21.008391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:26.637 [2024-12-10 03:12:21.008398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:26.637 [2024-12-10 03:12:21.008406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:26.637 [2024-12-10 03:12:21.008413] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:26.637 [2024-12-10 03:12:21.008422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:26.637 [2024-12-10 03:12:21.008432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:26.637 [2024-12-10 03:12:21.008440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:26.637 [2024-12-10 03:12:21.008447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:26.637 [2024-12-10 03:12:21.008455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:26.637 [2024-12-10 03:12:21.008462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.637 [2024-12-10 03:12:21.008470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:26.637 [2024-12-10 03:12:21.008480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:24:26.637 [2024-12-10 03:12:21.008488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.040825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.040877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.899 [2024-12-10 03:12:21.040890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.287 ms 00:24:26.899 [2024-12-10 03:12:21.040904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.040997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.041006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:26.899 [2024-12-10 03:12:21.041015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:26.899 [2024-12-10 03:12:21.041023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.085862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.085919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.899 [2024-12-10 03:12:21.085933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.774 ms 00:24:26.899 [2024-12-10 03:12:21.085942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.085993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.086003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.899 [2024-12-10 03:12:21.086017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:26.899 [2024-12-10 03:12:21.086025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.086663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.086688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.899 [2024-12-10 03:12:21.086699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:24:26.899 [2024-12-10 03:12:21.086707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.086869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.086882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.899 [2024-12-10 03:12:21.086897] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:24:26.899 [2024-12-10 03:12:21.086906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.102649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.102698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.899 [2024-12-10 03:12:21.102710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.722 ms 00:24:26.899 [2024-12-10 03:12:21.102719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.117289] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:26.899 [2024-12-10 03:12:21.117516] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:26.899 [2024-12-10 03:12:21.117537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.117546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:26.899 [2024-12-10 03:12:21.117555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.707 ms 00:24:26.899 [2024-12-10 03:12:21.117563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.143746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.143797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:26.899 [2024-12-10 03:12:21.143810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.135 ms 00:24:26.899 [2024-12-10 03:12:21.143818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.157223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.157436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:26.899 [2024-12-10 03:12:21.157457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.345 ms 00:24:26.899 [2024-12-10 03:12:21.157465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.170268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.170317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:26.899 [2024-12-10 03:12:21.170329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.688 ms 00:24:26.899 [2024-12-10 03:12:21.170336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.171005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.171040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:26.899 [2024-12-10 03:12:21.171055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:24:26.899 [2024-12-10 03:12:21.171063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.237668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.237727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:26.899 [2024-12-10 03:12:21.237751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.584 ms 00:24:26.899 [2024-12-10 03:12:21.237759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.249077] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:26.899 [2024-12-10 03:12:21.252619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.252667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:26.899 [2024-12-10 03:12:21.252680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.800 ms 00:24:26.899 [2024-12-10 03:12:21.252688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.252785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.252797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:26.899 [2024-12-10 03:12:21.252810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:26.899 [2024-12-10 03:12:21.252819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.254573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.254624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:26.899 [2024-12-10 03:12:21.254635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:24:26.899 [2024-12-10 03:12:21.254643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.254672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.254681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:26.899 [2024-12-10 03:12:21.254691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:26.899 [2024-12-10 03:12:21.254699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.899 [2024-12-10 03:12:21.254745] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:26.899 [2024-12-10 03:12:21.254756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.899 [2024-12-10 03:12:21.254764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:26.899 [2024-12-10 03:12:21.254773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:26.899 [2024-12-10 03:12:21.254781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.161 [2024-12-10 03:12:21.281298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.161 [2024-12-10 03:12:21.281350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:27.161 [2024-12-10 03:12:21.281370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.499 ms 00:24:27.161 [2024-12-10 03:12:21.281394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.161 [2024-12-10 03:12:21.281485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.161 [2024-12-10 03:12:21.281495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:27.161 [2024-12-10 03:12:21.281504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:27.161 [2024-12-10 03:12:21.281512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
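The layout dumped during this startup is internally consistent under the same 4 KiB block-size assumption: 20971520 L2P entries at the reported address size of 4 bytes need exactly the 80.00 MiB shown for Region l2p, and 2048 P2L checkpoint pages of 4 KiB each give the 8.00 MiB shown for each of p2l0 through p2l3. A quick check, again not part of the test output:

    echo "l2p = $(( 20971520 * 4 / 1048576 )) MiB"   # entries x address size -> 80 MiB, as in the layout dump
    echo "p2l = $(( 2048 * 4096 / 1048576 )) MiB"    # checkpoint pages x 4 KiB -> 8 MiB per p2l region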
00:24:27.161 [2024-12-10 03:12:21.282798] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.529 ms, result 0 00:24:28.105  [2024-12-10T03:12:23.881Z] Copying: 8156/1048576 [kB] (8156 kBps) [2024-12-10T03:12:24.823Z] Copying: 34/1024 [MB] (26 MBps) [2024-12-10T03:12:25.765Z] Copying: 53/1024 [MB] (18 MBps) [2024-12-10T03:12:26.709Z] Copying: 67/1024 [MB] (14 MBps) [2024-12-10T03:12:27.650Z] Copying: 84/1024 [MB] (16 MBps) [2024-12-10T03:12:28.645Z] Copying: 105/1024 [MB] (21 MBps) [2024-12-10T03:12:29.588Z] Copying: 121/1024 [MB] (16 MBps) [2024-12-10T03:12:30.531Z] Copying: 137/1024 [MB] (15 MBps) [2024-12-10T03:12:31.913Z] Copying: 150/1024 [MB] (13 MBps) [2024-12-10T03:12:32.487Z] Copying: 178/1024 [MB] (27 MBps) [2024-12-10T03:12:33.874Z] Copying: 191/1024 [MB] (12 MBps) [2024-12-10T03:12:34.817Z] Copying: 203/1024 [MB] (12 MBps) [2024-12-10T03:12:35.760Z] Copying: 217/1024 [MB] (13 MBps) [2024-12-10T03:12:36.703Z] Copying: 231/1024 [MB] (14 MBps) [2024-12-10T03:12:37.644Z] Copying: 242/1024 [MB] (10 MBps) [2024-12-10T03:12:38.588Z] Copying: 252/1024 [MB] (10 MBps) [2024-12-10T03:12:39.533Z] Copying: 264/1024 [MB] (12 MBps) [2024-12-10T03:12:40.520Z] Copying: 275/1024 [MB] (10 MBps) [2024-12-10T03:12:41.907Z] Copying: 291/1024 [MB] (15 MBps) [2024-12-10T03:12:42.479Z] Copying: 310/1024 [MB] (18 MBps) [2024-12-10T03:12:43.864Z] Copying: 331/1024 [MB] (21 MBps) [2024-12-10T03:12:44.807Z] Copying: 353/1024 [MB] (21 MBps) [2024-12-10T03:12:45.749Z] Copying: 375/1024 [MB] (22 MBps) [2024-12-10T03:12:46.693Z] Copying: 395/1024 [MB] (19 MBps) [2024-12-10T03:12:47.635Z] Copying: 410/1024 [MB] (15 MBps) [2024-12-10T03:12:48.579Z] Copying: 423/1024 [MB] (13 MBps) [2024-12-10T03:12:49.524Z] Copying: 440/1024 [MB] (16 MBps) [2024-12-10T03:12:50.911Z] Copying: 453/1024 [MB] (13 MBps) [2024-12-10T03:12:51.493Z] Copying: 464/1024 [MB] (10 MBps) [2024-12-10T03:12:52.880Z] Copying: 485588/1048576 [kB] (10228 kBps) [2024-12-10T03:12:53.823Z] Copying: 484/1024 [MB] (10 MBps) [2024-12-10T03:12:54.764Z] Copying: 494/1024 [MB] (10 MBps) [2024-12-10T03:12:55.707Z] Copying: 509/1024 [MB] (14 MBps) [2024-12-10T03:12:56.648Z] Copying: 520/1024 [MB] (10 MBps) [2024-12-10T03:12:57.590Z] Copying: 530/1024 [MB] (10 MBps) [2024-12-10T03:12:58.532Z] Copying: 544/1024 [MB] (13 MBps) [2024-12-10T03:12:59.921Z] Copying: 556/1024 [MB] (12 MBps) [2024-12-10T03:13:00.496Z] Copying: 569/1024 [MB] (12 MBps) [2024-12-10T03:13:01.883Z] Copying: 580/1024 [MB] (11 MBps) [2024-12-10T03:13:02.840Z] Copying: 592/1024 [MB] (11 MBps) [2024-12-10T03:13:03.847Z] Copying: 608/1024 [MB] (16 MBps) [2024-12-10T03:13:04.792Z] Copying: 622/1024 [MB] (13 MBps) [2024-12-10T03:13:05.736Z] Copying: 634/1024 [MB] (12 MBps) [2024-12-10T03:13:06.679Z] Copying: 648/1024 [MB] (13 MBps) [2024-12-10T03:13:07.623Z] Copying: 666/1024 [MB] (17 MBps) [2024-12-10T03:13:08.565Z] Copying: 684/1024 [MB] (17 MBps) [2024-12-10T03:13:09.510Z] Copying: 698/1024 [MB] (14 MBps) [2024-12-10T03:13:10.888Z] Copying: 713/1024 [MB] (14 MBps) [2024-12-10T03:13:11.830Z] Copying: 738/1024 [MB] (25 MBps) [2024-12-10T03:13:12.772Z] Copying: 753/1024 [MB] (14 MBps) [2024-12-10T03:13:13.714Z] Copying: 770/1024 [MB] (16 MBps) [2024-12-10T03:13:14.655Z] Copying: 784/1024 [MB] (13 MBps) [2024-12-10T03:13:15.599Z] Copying: 797/1024 [MB] (13 MBps) [2024-12-10T03:13:16.543Z] Copying: 808/1024 [MB] (10 MBps) [2024-12-10T03:13:17.488Z] Copying: 820/1024 [MB] (12 MBps) [2024-12-10T03:13:18.869Z] Copying: 831/1024 [MB] 
(10 MBps) [2024-12-10T03:13:19.804Z] Copying: 843/1024 [MB] (12 MBps) [2024-12-10T03:13:20.738Z] Copying: 857/1024 [MB] (13 MBps) [2024-12-10T03:13:21.673Z] Copying: 869/1024 [MB] (12 MBps) [2024-12-10T03:13:22.607Z] Copying: 881/1024 [MB] (12 MBps) [2024-12-10T03:13:23.540Z] Copying: 897/1024 [MB] (15 MBps) [2024-12-10T03:13:24.913Z] Copying: 916/1024 [MB] (19 MBps) [2024-12-10T03:13:25.478Z] Copying: 933/1024 [MB] (16 MBps) [2024-12-10T03:13:26.889Z] Copying: 952/1024 [MB] (19 MBps) [2024-12-10T03:13:27.481Z] Copying: 975/1024 [MB] (22 MBps) [2024-12-10T03:13:28.855Z] Copying: 995/1024 [MB] (19 MBps) [2024-12-10T03:13:29.112Z] Copying: 1011/1024 [MB] (16 MBps) [2024-12-10T03:13:29.112Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-10 03:13:28.991545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.724 [2024-12-10 03:13:28.991595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:34.724 [2024-12-10 03:13:28.991613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:34.724 [2024-12-10 03:13:28.991621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.724 [2024-12-10 03:13:28.991642] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:34.724 [2024-12-10 03:13:28.994252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.724 [2024-12-10 03:13:28.994386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:34.724 [2024-12-10 03:13:28.994403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:25:34.724 [2024-12-10 03:13:28.994411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.724 [2024-12-10 03:13:28.994632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.724 [2024-12-10 03:13:28.994642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:34.724 [2024-12-10 03:13:28.994651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:25:34.724 [2024-12-10 03:13:28.994662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.724 [2024-12-10 03:13:28.999960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.724 [2024-12-10 03:13:29.000342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:34.724 [2024-12-10 03:13:29.000358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.284 ms 00:25:34.724 [2024-12-10 03:13:29.000365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.724 [2024-12-10 03:13:29.007659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.724 [2024-12-10 03:13:29.007685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:34.724 [2024-12-10 03:13:29.007695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.255 ms 00:25:34.724 [2024-12-10 03:13:29.007706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.724 [2024-12-10 03:13:29.031732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.724 [2024-12-10 03:13:29.031773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:34.724 [2024-12-10 03:13:29.031783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.982 ms 00:25:34.724 [2024-12-10 03:13:29.031790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:34.724 [2024-12-10 03:13:29.046075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.724 [2024-12-10 03:13:29.046201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:34.724 [2024-12-10 03:13:29.046219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.254 ms 00:25:34.724 [2024-12-10 03:13:29.046227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.983 [2024-12-10 03:13:29.112506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.983 [2024-12-10 03:13:29.112617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:34.983 [2024-12-10 03:13:29.112633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.247 ms 00:25:34.983 [2024-12-10 03:13:29.112640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.983 [2024-12-10 03:13:29.135255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.983 [2024-12-10 03:13:29.135286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:34.983 [2024-12-10 03:13:29.135296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.600 ms 00:25:34.983 [2024-12-10 03:13:29.135303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.983 [2024-12-10 03:13:29.158440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.983 [2024-12-10 03:13:29.158469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:34.983 [2024-12-10 03:13:29.158479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.108 ms 00:25:34.983 [2024-12-10 03:13:29.158486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.983 [2024-12-10 03:13:29.180937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.983 [2024-12-10 03:13:29.180965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:34.983 [2024-12-10 03:13:29.180975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.421 ms 00:25:34.983 [2024-12-10 03:13:29.180982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.983 [2024-12-10 03:13:29.203521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.983 [2024-12-10 03:13:29.203634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:34.983 [2024-12-10 03:13:29.203649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.475 ms 00:25:34.983 [2024-12-10 03:13:29.203656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.983 [2024-12-10 03:13:29.203682] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:34.983 [2024-12-10 03:13:29.203694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:34.983 [2024-12-10 03:13:29.203704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 
wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:34.983 [2024-12-10 03:13:29.203873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.203995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204105] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204292] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:34.984 [2024-12-10 03:13:29.204480] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:34.984 [2024-12-10 03:13:29.204488] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7042e1d7-5da6-4d4d-8fc3-9955076e703c 00:25:34.984 [2024-12-10 03:13:29.204496] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:34.984 [2024-12-10 03:13:29.204503] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 30144 00:25:34.984 [2024-12-10 03:13:29.204510] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user 
writes: 29184 00:25:34.984 [2024-12-10 03:13:29.204518] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0329 00:25:34.984 [2024-12-10 03:13:29.204529] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:34.984 [2024-12-10 03:13:29.204542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:34.984 [2024-12-10 03:13:29.204549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:34.984 [2024-12-10 03:13:29.204555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:34.984 [2024-12-10 03:13:29.204562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:34.984 [2024-12-10 03:13:29.204569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.984 [2024-12-10 03:13:29.204577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:34.984 [2024-12-10 03:13:29.204587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:25:34.984 [2024-12-10 03:13:29.204595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.984 [2024-12-10 03:13:29.216821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.984 [2024-12-10 03:13:29.216849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:34.984 [2024-12-10 03:13:29.216864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.210 ms 00:25:34.984 [2024-12-10 03:13:29.216871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.984 [2024-12-10 03:13:29.217215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.984 [2024-12-10 03:13:29.217225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:34.985 [2024-12-10 03:13:29.217233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:25:34.985 [2024-12-10 03:13:29.217240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.985 [2024-12-10 03:13:29.249859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.985 [2024-12-10 03:13:29.249892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:34.985 [2024-12-10 03:13:29.249901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.985 [2024-12-10 03:13:29.249909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.985 [2024-12-10 03:13:29.249961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.985 [2024-12-10 03:13:29.249968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:34.985 [2024-12-10 03:13:29.249976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.985 [2024-12-10 03:13:29.249983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.985 [2024-12-10 03:13:29.250028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.985 [2024-12-10 03:13:29.250037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:34.985 [2024-12-10 03:13:29.250048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.985 [2024-12-10 03:13:29.250055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.985 [2024-12-10 03:13:29.250069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.985 [2024-12-10 03:13:29.250077] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:34.985 [2024-12-10 03:13:29.250084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.985 [2024-12-10 03:13:29.250091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.985 [2024-12-10 03:13:29.326734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.985 [2024-12-10 03:13:29.326774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:34.985 [2024-12-10 03:13:29.326784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.985 [2024-12-10 03:13:29.326791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.246 [2024-12-10 03:13:29.389170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.246 [2024-12-10 03:13:29.389205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.246 [2024-12-10 03:13:29.389215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.246 [2024-12-10 03:13:29.389222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.246 [2024-12-10 03:13:29.389281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.246 [2024-12-10 03:13:29.389291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.246 [2024-12-10 03:13:29.389298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.246 [2024-12-10 03:13:29.389310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.246 [2024-12-10 03:13:29.389344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.246 [2024-12-10 03:13:29.389352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.246 [2024-12-10 03:13:29.389360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.246 [2024-12-10 03:13:29.389367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.246 [2024-12-10 03:13:29.389469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.246 [2024-12-10 03:13:29.389479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.246 [2024-12-10 03:13:29.389487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.246 [2024-12-10 03:13:29.389495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.246 [2024-12-10 03:13:29.389546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.246 [2024-12-10 03:13:29.389556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:35.246 [2024-12-10 03:13:29.389564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.246 [2024-12-10 03:13:29.389571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.246 [2024-12-10 03:13:29.389603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.246 [2024-12-10 03:13:29.389612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.246 [2024-12-10 03:13:29.389620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.246 [2024-12-10 03:13:29.389627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.246 [2024-12-10 03:13:29.389665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:25:35.246 [2024-12-10 03:13:29.389675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.246 [2024-12-10 03:13:29.389682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.246 [2024-12-10 03:13:29.389690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.246 [2024-12-10 03:13:29.389797] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 398.223 ms, result 0 00:25:35.811 00:25:35.811 00:25:35.811 03:13:30 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:38.337 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:38.337 03:13:32 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:38.337 03:13:32 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:38.337 03:13:32 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:38.337 03:13:32 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:38.337 03:13:32 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:38.337 03:13:32 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77270 00:25:38.337 03:13:32 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77270 ']' 00:25:38.337 03:13:32 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77270 00:25:38.337 Process with pid 77270 is not found 00:25:38.337 Remove shared memory files 00:25:38.337 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77270) - No such process 00:25:38.337 03:13:32 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77270 is not found' 00:25:38.337 03:13:32 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:38.337 03:13:32 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:38.337 03:13:32 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:38.337 03:13:32 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:38.337 03:13:32 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:38.337 03:13:32 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:38.337 03:13:32 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:38.337 ************************************ 00:25:38.338 END TEST ftl_restore 00:25:38.338 ************************************ 00:25:38.338 00:25:38.338 real 5m4.822s 00:25:38.338 user 4m53.117s 00:25:38.338 sys 0m11.141s 00:25:38.338 03:13:32 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:38.338 03:13:32 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:38.338 03:13:32 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:38.338 03:13:32 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:38.338 03:13:32 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:38.338 03:13:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:38.338 ************************************ 00:25:38.338 START TEST ftl_dirty_shutdown 00:25:38.338 ************************************ 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:38.338 * 
Looking for test storage... 00:25:38.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:38.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.338 --rc genhtml_branch_coverage=1 00:25:38.338 --rc genhtml_function_coverage=1 00:25:38.338 --rc genhtml_legend=1 00:25:38.338 --rc geninfo_all_blocks=1 00:25:38.338 --rc geninfo_unexecuted_blocks=1 00:25:38.338 00:25:38.338 ' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:38.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.338 --rc genhtml_branch_coverage=1 00:25:38.338 --rc genhtml_function_coverage=1 00:25:38.338 --rc genhtml_legend=1 00:25:38.338 --rc geninfo_all_blocks=1 00:25:38.338 --rc geninfo_unexecuted_blocks=1 00:25:38.338 00:25:38.338 ' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:38.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.338 --rc genhtml_branch_coverage=1 00:25:38.338 --rc genhtml_function_coverage=1 00:25:38.338 --rc genhtml_legend=1 00:25:38.338 --rc geninfo_all_blocks=1 00:25:38.338 --rc geninfo_unexecuted_blocks=1 00:25:38.338 00:25:38.338 ' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:38.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.338 --rc genhtml_branch_coverage=1 00:25:38.338 --rc genhtml_function_coverage=1 00:25:38.338 --rc genhtml_legend=1 00:25:38.338 --rc geninfo_all_blocks=1 00:25:38.338 --rc geninfo_unexecuted_blocks=1 00:25:38.338 00:25:38.338 ' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:38.338 03:13:32 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80469 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80469 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80469 ']' 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:38.338 03:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:38.338 [2024-12-10 03:13:32.679393] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:25:38.338 [2024-12-10 03:13:32.679646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80469 ] 00:25:38.596 [2024-12-10 03:13:32.836810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.596 [2024-12-10 03:13:32.930779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.161 03:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:39.161 03:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:39.161 03:13:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:39.161 03:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:39.161 03:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:39.161 03:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:39.161 03:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:39.161 03:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:39.419 03:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:39.419 03:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:39.677 03:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:39.677 03:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:39.677 03:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:39.677 03:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:39.677 03:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:39.677 03:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:39.677 03:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:39.677 { 00:25:39.677 "name": "nvme0n1", 00:25:39.677 "aliases": [ 00:25:39.677 "82e1cb8a-4ece-4620-bdc1-2fee7363169e" 00:25:39.677 ], 00:25:39.677 "product_name": "NVMe disk", 00:25:39.677 "block_size": 4096, 00:25:39.677 "num_blocks": 1310720, 00:25:39.677 "uuid": "82e1cb8a-4ece-4620-bdc1-2fee7363169e", 00:25:39.677 "numa_id": -1, 00:25:39.677 "assigned_rate_limits": { 00:25:39.677 "rw_ios_per_sec": 0, 00:25:39.677 "rw_mbytes_per_sec": 0, 00:25:39.677 "r_mbytes_per_sec": 0, 00:25:39.677 "w_mbytes_per_sec": 0 00:25:39.677 }, 00:25:39.677 "claimed": true, 00:25:39.677 "claim_type": "read_many_write_one", 00:25:39.677 "zoned": false, 00:25:39.677 "supported_io_types": { 00:25:39.677 "read": true, 00:25:39.677 "write": true, 00:25:39.677 "unmap": true, 00:25:39.677 "flush": true, 00:25:39.677 "reset": true, 00:25:39.677 "nvme_admin": true, 00:25:39.677 "nvme_io": true, 00:25:39.677 "nvme_io_md": false, 00:25:39.677 "write_zeroes": true, 00:25:39.677 "zcopy": false, 00:25:39.677 "get_zone_info": false, 00:25:39.677 "zone_management": false, 00:25:39.677 "zone_append": false, 00:25:39.677 "compare": true, 00:25:39.677 "compare_and_write": false, 00:25:39.677 "abort": true, 00:25:39.677 "seek_hole": false, 00:25:39.677 "seek_data": false, 00:25:39.677 
"copy": true, 00:25:39.677 "nvme_iov_md": false 00:25:39.677 }, 00:25:39.677 "driver_specific": { 00:25:39.677 "nvme": [ 00:25:39.677 { 00:25:39.677 "pci_address": "0000:00:11.0", 00:25:39.677 "trid": { 00:25:39.677 "trtype": "PCIe", 00:25:39.677 "traddr": "0000:00:11.0" 00:25:39.677 }, 00:25:39.677 "ctrlr_data": { 00:25:39.677 "cntlid": 0, 00:25:39.677 "vendor_id": "0x1b36", 00:25:39.677 "model_number": "QEMU NVMe Ctrl", 00:25:39.677 "serial_number": "12341", 00:25:39.677 "firmware_revision": "8.0.0", 00:25:39.677 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:39.677 "oacs": { 00:25:39.677 "security": 0, 00:25:39.677 "format": 1, 00:25:39.677 "firmware": 0, 00:25:39.677 "ns_manage": 1 00:25:39.677 }, 00:25:39.677 "multi_ctrlr": false, 00:25:39.677 "ana_reporting": false 00:25:39.677 }, 00:25:39.677 "vs": { 00:25:39.677 "nvme_version": "1.4" 00:25:39.677 }, 00:25:39.677 "ns_data": { 00:25:39.677 "id": 1, 00:25:39.677 "can_share": false 00:25:39.677 } 00:25:39.677 } 00:25:39.677 ], 00:25:39.677 "mp_policy": "active_passive" 00:25:39.677 } 00:25:39.677 } 00:25:39.677 ]' 00:25:39.677 03:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:39.677 03:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:39.677 03:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:39.935 03:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:39.935 03:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:39.935 03:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:39.935 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:39.935 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:39.935 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:39.935 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:39.935 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:39.935 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=79510b54-bed7-43d9-a4d5-083382eb4b70 00:25:39.935 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:39.935 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 79510b54-bed7-43d9-a4d5-083382eb4b70 00:25:40.193 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:40.451 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=41297c5e-30fe-48b6-9311-ebbd4c602642 00:25:40.451 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 41297c5e-30fe-48b6-9311-ebbd4c602642 00:25:40.709 03:13:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=787dc656-1f44-41a7-ad01-059657ffbd32 00:25:40.709 03:13:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:40.709 03:13:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 787dc656-1f44-41a7-ad01-059657ffbd32 00:25:40.709 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:40.709 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:40.709 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=787dc656-1f44-41a7-ad01-059657ffbd32 00:25:40.709 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:40.709 03:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 787dc656-1f44-41a7-ad01-059657ffbd32 00:25:40.709 03:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=787dc656-1f44-41a7-ad01-059657ffbd32 00:25:40.709 03:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:40.709 03:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:40.709 03:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:40.709 03:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 787dc656-1f44-41a7-ad01-059657ffbd32 00:25:40.966 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:40.966 { 00:25:40.966 "name": "787dc656-1f44-41a7-ad01-059657ffbd32", 00:25:40.966 "aliases": [ 00:25:40.966 "lvs/nvme0n1p0" 00:25:40.966 ], 00:25:40.966 "product_name": "Logical Volume", 00:25:40.966 "block_size": 4096, 00:25:40.966 "num_blocks": 26476544, 00:25:40.966 "uuid": "787dc656-1f44-41a7-ad01-059657ffbd32", 00:25:40.966 "assigned_rate_limits": { 00:25:40.966 "rw_ios_per_sec": 0, 00:25:40.966 "rw_mbytes_per_sec": 0, 00:25:40.966 "r_mbytes_per_sec": 0, 00:25:40.966 "w_mbytes_per_sec": 0 00:25:40.966 }, 00:25:40.966 "claimed": false, 00:25:40.966 "zoned": false, 00:25:40.966 "supported_io_types": { 00:25:40.966 "read": true, 00:25:40.966 "write": true, 00:25:40.966 "unmap": true, 00:25:40.966 "flush": false, 00:25:40.966 "reset": true, 00:25:40.966 "nvme_admin": false, 00:25:40.966 "nvme_io": false, 00:25:40.966 "nvme_io_md": false, 00:25:40.966 "write_zeroes": true, 00:25:40.966 "zcopy": false, 00:25:40.966 "get_zone_info": false, 00:25:40.966 "zone_management": false, 00:25:40.966 "zone_append": false, 00:25:40.966 "compare": false, 00:25:40.966 "compare_and_write": false, 00:25:40.966 "abort": false, 00:25:40.966 "seek_hole": true, 00:25:40.966 "seek_data": true, 00:25:40.966 "copy": false, 00:25:40.966 "nvme_iov_md": false 00:25:40.966 }, 00:25:40.966 "driver_specific": { 00:25:40.966 "lvol": { 00:25:40.966 "lvol_store_uuid": "41297c5e-30fe-48b6-9311-ebbd4c602642", 00:25:40.966 "base_bdev": "nvme0n1", 00:25:40.966 "thin_provision": true, 00:25:40.966 "num_allocated_clusters": 0, 00:25:40.966 "snapshot": false, 00:25:40.966 "clone": false, 00:25:40.966 "esnap_clone": false 00:25:40.966 } 00:25:40.966 } 00:25:40.966 } 00:25:40.966 ]' 00:25:40.966 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:40.966 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:40.966 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:40.966 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:40.966 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:40.966 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:40.966 03:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:40.966 03:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:40.966 03:13:35 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:41.223 03:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:41.223 03:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:41.223 03:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 787dc656-1f44-41a7-ad01-059657ffbd32 00:25:41.223 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=787dc656-1f44-41a7-ad01-059657ffbd32 00:25:41.223 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:41.223 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:41.223 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:41.223 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 787dc656-1f44-41a7-ad01-059657ffbd32 00:25:41.480 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:41.480 { 00:25:41.480 "name": "787dc656-1f44-41a7-ad01-059657ffbd32", 00:25:41.480 "aliases": [ 00:25:41.480 "lvs/nvme0n1p0" 00:25:41.480 ], 00:25:41.480 "product_name": "Logical Volume", 00:25:41.481 "block_size": 4096, 00:25:41.481 "num_blocks": 26476544, 00:25:41.481 "uuid": "787dc656-1f44-41a7-ad01-059657ffbd32", 00:25:41.481 "assigned_rate_limits": { 00:25:41.481 "rw_ios_per_sec": 0, 00:25:41.481 "rw_mbytes_per_sec": 0, 00:25:41.481 "r_mbytes_per_sec": 0, 00:25:41.481 "w_mbytes_per_sec": 0 00:25:41.481 }, 00:25:41.481 "claimed": false, 00:25:41.481 "zoned": false, 00:25:41.481 "supported_io_types": { 00:25:41.481 "read": true, 00:25:41.481 "write": true, 00:25:41.481 "unmap": true, 00:25:41.481 "flush": false, 00:25:41.481 "reset": true, 00:25:41.481 "nvme_admin": false, 00:25:41.481 "nvme_io": false, 00:25:41.481 "nvme_io_md": false, 00:25:41.481 "write_zeroes": true, 00:25:41.481 "zcopy": false, 00:25:41.481 "get_zone_info": false, 00:25:41.481 "zone_management": false, 00:25:41.481 "zone_append": false, 00:25:41.481 "compare": false, 00:25:41.481 "compare_and_write": false, 00:25:41.481 "abort": false, 00:25:41.481 "seek_hole": true, 00:25:41.481 "seek_data": true, 00:25:41.481 "copy": false, 00:25:41.481 "nvme_iov_md": false 00:25:41.481 }, 00:25:41.481 "driver_specific": { 00:25:41.481 "lvol": { 00:25:41.481 "lvol_store_uuid": "41297c5e-30fe-48b6-9311-ebbd4c602642", 00:25:41.481 "base_bdev": "nvme0n1", 00:25:41.481 "thin_provision": true, 00:25:41.481 "num_allocated_clusters": 0, 00:25:41.481 "snapshot": false, 00:25:41.481 "clone": false, 00:25:41.481 "esnap_clone": false 00:25:41.481 } 00:25:41.481 } 00:25:41.481 } 00:25:41.481 ]' 00:25:41.481 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:41.481 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:41.481 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:41.481 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:41.481 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:41.481 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:41.481 03:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:41.481 03:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:41.739 03:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:41.739 03:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 787dc656-1f44-41a7-ad01-059657ffbd32 00:25:41.739 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=787dc656-1f44-41a7-ad01-059657ffbd32 00:25:41.739 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:41.739 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:41.739 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:41.739 03:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 787dc656-1f44-41a7-ad01-059657ffbd32 00:25:41.739 03:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:41.739 { 00:25:41.739 "name": "787dc656-1f44-41a7-ad01-059657ffbd32", 00:25:41.739 "aliases": [ 00:25:41.739 "lvs/nvme0n1p0" 00:25:41.739 ], 00:25:41.739 "product_name": "Logical Volume", 00:25:41.739 "block_size": 4096, 00:25:41.739 "num_blocks": 26476544, 00:25:41.739 "uuid": "787dc656-1f44-41a7-ad01-059657ffbd32", 00:25:41.739 "assigned_rate_limits": { 00:25:41.739 "rw_ios_per_sec": 0, 00:25:41.739 "rw_mbytes_per_sec": 0, 00:25:41.739 "r_mbytes_per_sec": 0, 00:25:41.739 "w_mbytes_per_sec": 0 00:25:41.739 }, 00:25:41.739 "claimed": false, 00:25:41.739 "zoned": false, 00:25:41.739 "supported_io_types": { 00:25:41.739 "read": true, 00:25:41.739 "write": true, 00:25:41.739 "unmap": true, 00:25:41.739 "flush": false, 00:25:41.739 "reset": true, 00:25:41.739 "nvme_admin": false, 00:25:41.739 "nvme_io": false, 00:25:41.739 "nvme_io_md": false, 00:25:41.739 "write_zeroes": true, 00:25:41.739 "zcopy": false, 00:25:41.739 "get_zone_info": false, 00:25:41.739 "zone_management": false, 00:25:41.739 "zone_append": false, 00:25:41.739 "compare": false, 00:25:41.739 "compare_and_write": false, 00:25:41.739 "abort": false, 00:25:41.739 "seek_hole": true, 00:25:41.739 "seek_data": true, 00:25:41.739 "copy": false, 00:25:41.739 "nvme_iov_md": false 00:25:41.739 }, 00:25:41.739 "driver_specific": { 00:25:41.739 "lvol": { 00:25:41.739 "lvol_store_uuid": "41297c5e-30fe-48b6-9311-ebbd4c602642", 00:25:41.739 "base_bdev": "nvme0n1", 00:25:41.739 "thin_provision": true, 00:25:41.739 "num_allocated_clusters": 0, 00:25:41.739 "snapshot": false, 00:25:41.739 "clone": false, 00:25:41.739 "esnap_clone": false 00:25:41.739 } 00:25:41.739 } 00:25:41.739 } 00:25:41.739 ]' 00:25:41.739 03:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:41.998 03:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:41.998 03:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:41.998 03:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:41.998 03:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:41.998 03:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:41.998 03:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:41.998 03:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 787dc656-1f44-41a7-ad01-059657ffbd32 
--l2p_dram_limit 10' 00:25:41.998 03:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:41.998 03:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:41.998 03:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:41.998 03:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 787dc656-1f44-41a7-ad01-059657ffbd32 --l2p_dram_limit 10 -c nvc0n1p0 00:25:41.998 [2024-12-10 03:13:36.330909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.998 [2024-12-10 03:13:36.330946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:41.998 [2024-12-10 03:13:36.330961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:41.998 [2024-12-10 03:13:36.330967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.998 [2024-12-10 03:13:36.331014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.998 [2024-12-10 03:13:36.331021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:41.998 [2024-12-10 03:13:36.331028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:41.998 [2024-12-10 03:13:36.331034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.998 [2024-12-10 03:13:36.331053] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:41.998 [2024-12-10 03:13:36.331703] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:41.999 [2024-12-10 03:13:36.331727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.999 [2024-12-10 03:13:36.331733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:41.999 [2024-12-10 03:13:36.331741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:25:41.999 [2024-12-10 03:13:36.331747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.999 [2024-12-10 03:13:36.331771] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 201fe23d-9e7b-4c75-b1d6-41bdd989f7c9 00:25:41.999 [2024-12-10 03:13:36.332712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.999 [2024-12-10 03:13:36.332734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:41.999 [2024-12-10 03:13:36.332741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:41.999 [2024-12-10 03:13:36.332748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.999 [2024-12-10 03:13:36.337371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.999 [2024-12-10 03:13:36.337406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:41.999 [2024-12-10 03:13:36.337425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.568 ms 00:25:41.999 [2024-12-10 03:13:36.337432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.999 [2024-12-10 03:13:36.337495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.999 [2024-12-10 03:13:36.337504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:41.999 [2024-12-10 03:13:36.337510] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:41.999 [2024-12-10 03:13:36.337520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.999 [2024-12-10 03:13:36.337554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.999 [2024-12-10 03:13:36.337563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:41.999 [2024-12-10 03:13:36.337570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:41.999 [2024-12-10 03:13:36.337577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.999 [2024-12-10 03:13:36.337593] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:41.999 [2024-12-10 03:13:36.340462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.999 [2024-12-10 03:13:36.340565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:41.999 [2024-12-10 03:13:36.340580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:25:41.999 [2024-12-10 03:13:36.340586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.999 [2024-12-10 03:13:36.340617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.999 [2024-12-10 03:13:36.340624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:41.999 [2024-12-10 03:13:36.340631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:41.999 [2024-12-10 03:13:36.340637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.999 [2024-12-10 03:13:36.340657] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:41.999 [2024-12-10 03:13:36.340767] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:41.999 [2024-12-10 03:13:36.340779] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:41.999 [2024-12-10 03:13:36.340787] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:41.999 [2024-12-10 03:13:36.340796] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:41.999 [2024-12-10 03:13:36.340802] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:41.999 [2024-12-10 03:13:36.340810] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:41.999 [2024-12-10 03:13:36.340816] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:41.999 [2024-12-10 03:13:36.340825] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:41.999 [2024-12-10 03:13:36.340831] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:41.999 [2024-12-10 03:13:36.340837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.999 [2024-12-10 03:13:36.340848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:41.999 [2024-12-10 03:13:36.340856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:25:41.999 [2024-12-10 03:13:36.340861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.999 [2024-12-10 03:13:36.340927] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.999 [2024-12-10 03:13:36.340933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:41.999 [2024-12-10 03:13:36.340940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:41.999 [2024-12-10 03:13:36.340945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.999 [2024-12-10 03:13:36.341023] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:41.999 [2024-12-10 03:13:36.341030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:41.999 [2024-12-10 03:13:36.341038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:41.999 [2024-12-10 03:13:36.341044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.999 [2024-12-10 03:13:36.341051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:41.999 [2024-12-10 03:13:36.341057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:41.999 [2024-12-10 03:13:36.341063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:41.999 [2024-12-10 03:13:36.341068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:41.999 [2024-12-10 03:13:36.341076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:41.999 [2024-12-10 03:13:36.341081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:41.999 [2024-12-10 03:13:36.341088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:41.999 [2024-12-10 03:13:36.341093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:41.999 [2024-12-10 03:13:36.341099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:41.999 [2024-12-10 03:13:36.341104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:41.999 [2024-12-10 03:13:36.341111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:41.999 [2024-12-10 03:13:36.341116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.999 [2024-12-10 03:13:36.341124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:41.999 [2024-12-10 03:13:36.341130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:41.999 [2024-12-10 03:13:36.341136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.999 [2024-12-10 03:13:36.341141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:41.999 [2024-12-10 03:13:36.341148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:41.999 [2024-12-10 03:13:36.341153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:41.999 [2024-12-10 03:13:36.341159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:41.999 [2024-12-10 03:13:36.341164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:41.999 [2024-12-10 03:13:36.341170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:41.999 [2024-12-10 03:13:36.341174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:41.999 [2024-12-10 03:13:36.341181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:41.999 [2024-12-10 03:13:36.341185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:41.999 [2024-12-10 03:13:36.341191] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:41.999 [2024-12-10 03:13:36.341196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:41.999 [2024-12-10 03:13:36.341203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:41.999 [2024-12-10 03:13:36.341208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:41.999 [2024-12-10 03:13:36.341215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:41.999 [2024-12-10 03:13:36.341220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:41.999 [2024-12-10 03:13:36.341227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:41.999 [2024-12-10 03:13:36.341232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:41.999 [2024-12-10 03:13:36.341238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:41.999 [2024-12-10 03:13:36.341243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:41.999 [2024-12-10 03:13:36.341249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:41.999 [2024-12-10 03:13:36.341254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.999 [2024-12-10 03:13:36.341261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:41.999 [2024-12-10 03:13:36.341265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:41.999 [2024-12-10 03:13:36.341271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.999 [2024-12-10 03:13:36.341276] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:41.999 [2024-12-10 03:13:36.341283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:41.999 [2024-12-10 03:13:36.341288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:41.999 [2024-12-10 03:13:36.341295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.999 [2024-12-10 03:13:36.341301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:41.999 [2024-12-10 03:13:36.341308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:41.999 [2024-12-10 03:13:36.341314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:41.999 [2024-12-10 03:13:36.341321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:41.999 [2024-12-10 03:13:36.341326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:41.999 [2024-12-10 03:13:36.341332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:41.999 [2024-12-10 03:13:36.341338] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:41.999 [2024-12-10 03:13:36.341348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:41.999 [2024-12-10 03:13:36.341354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:41.999 [2024-12-10 03:13:36.341360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:41.999 [2024-12-10 03:13:36.341365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:42.000 [2024-12-10 03:13:36.341372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:42.000 [2024-12-10 03:13:36.341392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:42.000 [2024-12-10 03:13:36.341400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:42.000 [2024-12-10 03:13:36.341406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:42.000 [2024-12-10 03:13:36.341413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:42.000 [2024-12-10 03:13:36.341418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:42.000 [2024-12-10 03:13:36.341427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:42.000 [2024-12-10 03:13:36.341432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:42.000 [2024-12-10 03:13:36.341439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:42.000 [2024-12-10 03:13:36.341445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:42.000 [2024-12-10 03:13:36.341452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:42.000 [2024-12-10 03:13:36.341458] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:42.000 [2024-12-10 03:13:36.341465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.000 [2024-12-10 03:13:36.341471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:42.000 [2024-12-10 03:13:36.341478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:42.000 [2024-12-10 03:13:36.341484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:42.000 [2024-12-10 03:13:36.341491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:42.000 [2024-12-10 03:13:36.341496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.000 [2024-12-10 03:13:36.341503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:42.000 [2024-12-10 03:13:36.341509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:25:42.000 [2024-12-10 03:13:36.341515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.000 [2024-12-10 03:13:36.341555] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:42.000 [2024-12-10 03:13:36.341565] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:45.298 [2024-12-10 03:13:39.338634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.338697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:45.298 [2024-12-10 03:13:39.338711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2997.067 ms 00:25:45.298 [2024-12-10 03:13:39.338721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.298 [2024-12-10 03:13:39.364253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.364292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:45.298 [2024-12-10 03:13:39.364305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.331 ms 00:25:45.298 [2024-12-10 03:13:39.364314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.298 [2024-12-10 03:13:39.364449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.364462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:45.298 [2024-12-10 03:13:39.364471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:45.298 [2024-12-10 03:13:39.364484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.298 [2024-12-10 03:13:39.395063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.395097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:45.298 [2024-12-10 03:13:39.395107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.547 ms 00:25:45.298 [2024-12-10 03:13:39.395117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.298 [2024-12-10 03:13:39.395142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.395155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:45.298 [2024-12-10 03:13:39.395163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:45.298 [2024-12-10 03:13:39.395177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.298 [2024-12-10 03:13:39.395545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.395563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:45.298 [2024-12-10 03:13:39.395572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:25:45.298 [2024-12-10 03:13:39.395581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.298 [2024-12-10 03:13:39.395681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.395691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:45.298 [2024-12-10 03:13:39.395701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:45.298 [2024-12-10 03:13:39.395712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.298 [2024-12-10 03:13:39.409492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.409628] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:45.298 [2024-12-10 03:13:39.409644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.763 ms 00:25:45.298 [2024-12-10 03:13:39.409655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.298 [2024-12-10 03:13:39.438357] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:45.298 [2024-12-10 03:13:39.441033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.441158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:45.298 [2024-12-10 03:13:39.441179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.304 ms 00:25:45.298 [2024-12-10 03:13:39.441187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.298 [2024-12-10 03:13:39.517044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.517083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:45.298 [2024-12-10 03:13:39.517098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.821 ms 00:25:45.298 [2024-12-10 03:13:39.517107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.298 [2024-12-10 03:13:39.517282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.517296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:45.298 [2024-12-10 03:13:39.517308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:25:45.298 [2024-12-10 03:13:39.517316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.298 [2024-12-10 03:13:39.540782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.540812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:45.298 [2024-12-10 03:13:39.540825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.423 ms 00:25:45.298 [2024-12-10 03:13:39.540832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.298 [2024-12-10 03:13:39.562972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.563002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:45.298 [2024-12-10 03:13:39.563015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.102 ms 00:25:45.298 [2024-12-10 03:13:39.563022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.298 [2024-12-10 03:13:39.563601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.298 [2024-12-10 03:13:39.563616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:45.298 [2024-12-10 03:13:39.563627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:25:45.298 [2024-12-10 03:13:39.563636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.299 [2024-12-10 03:13:39.636642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.299 [2024-12-10 03:13:39.636673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:45.299 [2024-12-10 03:13:39.636688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.975 ms 00:25:45.299 [2024-12-10 03:13:39.636696] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.299 [2024-12-10 03:13:39.660903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.299 [2024-12-10 03:13:39.660934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:45.299 [2024-12-10 03:13:39.660947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.141 ms 00:25:45.299 [2024-12-10 03:13:39.660955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.557 [2024-12-10 03:13:39.684370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.557 [2024-12-10 03:13:39.684407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:45.557 [2024-12-10 03:13:39.684419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.381 ms 00:25:45.557 [2024-12-10 03:13:39.684427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.557 [2024-12-10 03:13:39.707844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.557 [2024-12-10 03:13:39.707874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:45.557 [2024-12-10 03:13:39.707887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.384 ms 00:25:45.557 [2024-12-10 03:13:39.707894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.557 [2024-12-10 03:13:39.707944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.557 [2024-12-10 03:13:39.707953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:45.557 [2024-12-10 03:13:39.707966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:45.557 [2024-12-10 03:13:39.707973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.557 [2024-12-10 03:13:39.708054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.557 [2024-12-10 03:13:39.708066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:45.557 [2024-12-10 03:13:39.708075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:45.557 [2024-12-10 03:13:39.708083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.557 [2024-12-10 03:13:39.708901] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3377.595 ms, result 0 00:25:45.557 { 00:25:45.557 "name": "ftl0", 00:25:45.557 "uuid": "201fe23d-9e7b-4c75-b1d6-41bdd989f7c9" 00:25:45.557 } 00:25:45.557 03:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:45.557 03:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:45.557 03:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:45.815 03:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:45.815 03:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:45.815 /dev/nbd0 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:45.815 1+0 records in 00:25:45.815 1+0 records out 00:25:45.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292379 s, 14.0 MB/s 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:45.815 03:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:46.072 [2024-12-10 03:13:40.235045] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:25:46.072 [2024-12-10 03:13:40.235144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80605 ] 00:25:46.072 [2024-12-10 03:13:40.394912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.329 [2024-12-10 03:13:40.489070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.702  [2024-12-10T03:13:43.024Z] Copying: 194/1024 [MB] (194 MBps) [2024-12-10T03:13:43.957Z] Copying: 389/1024 [MB] (195 MBps) [2024-12-10T03:13:44.891Z] Copying: 634/1024 [MB] (244 MBps) [2024-12-10T03:13:45.455Z] Copying: 887/1024 [MB] (253 MBps) [2024-12-10T03:13:46.021Z] Copying: 1024/1024 [MB] (average 225 MBps) 00:25:51.633 00:25:51.633 03:13:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:54.161 03:13:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:54.161 [2024-12-10 03:13:48.041198] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
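[annotation] The trace above walks through the NBD plumbing the test uses to push data through the FTL device: modprobe nbd, an nbd_start_disk RPC exporting ftl0 as /dev/nbd0, the waitfornbd poll against /proc/partitions with a one-block dd sanity read, then spdk_dd filling a 1 GiB testfile from /dev/urandom before writing it through the NBD device with O_DIRECT. A minimal sketch of the same pattern, assuming a running spdk_tgt with a bdev named ftl0 and the stock rpc.py client from an SPDK checkout; plain dd stands in for spdk_dd here and paths are illustrative:

    # Sketch only: export an SPDK bdev over NBD and write through it.
    # Assumes spdk_tgt is already running with a bdev named ftl0.
    sudo modprobe nbd                                    # load the kernel NBD driver
    ./scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0       # expose ftl0 as /dev/nbd0
    for i in $(seq 1 20); do                             # wait for the partition entry
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/urandom of=testfile bs=4096 count=262144  # 4096 B x 262144 = 1 GiB source
    dd if=testfile of=/dev/nbd0 bs=4096 oflag=direct     # write it through the FTL device

The md5sum taken over testfile (dirty_shutdown.sh@76) presumably provides the reference checksum for verifying data read back after the dirty shutdown and recovery.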
00:25:54.161 [2024-12-10 03:13:48.041313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80688 ] 00:25:54.161 [2024-12-10 03:13:48.201763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.161 [2024-12-10 03:13:48.294990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.547  [2024-12-10T03:13:50.543Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-10T03:13:51.916Z] Copying: 41/1024 [MB] (16 MBps) [2024-12-10T03:13:52.850Z] Copying: 68/1024 [MB] (27 MBps) [2024-12-10T03:13:53.783Z] Copying: 103/1024 [MB] (35 MBps) [2024-12-10T03:13:54.715Z] Copying: 132/1024 [MB] (28 MBps) [2024-12-10T03:13:55.646Z] Copying: 160/1024 [MB] (28 MBps) [2024-12-10T03:13:56.579Z] Copying: 190/1024 [MB] (29 MBps) [2024-12-10T03:13:57.522Z] Copying: 220/1024 [MB] (29 MBps) [2024-12-10T03:13:58.903Z] Copying: 248/1024 [MB] (28 MBps) [2024-12-10T03:13:59.837Z] Copying: 271/1024 [MB] (23 MBps) [2024-12-10T03:14:00.770Z] Copying: 300/1024 [MB] (28 MBps) [2024-12-10T03:14:01.768Z] Copying: 329/1024 [MB] (28 MBps) [2024-12-10T03:14:02.702Z] Copying: 360/1024 [MB] (30 MBps) [2024-12-10T03:14:03.642Z] Copying: 389/1024 [MB] (29 MBps) [2024-12-10T03:14:04.587Z] Copying: 417/1024 [MB] (28 MBps) [2024-12-10T03:14:05.531Z] Copying: 441/1024 [MB] (24 MBps) [2024-12-10T03:14:06.918Z] Copying: 473/1024 [MB] (31 MBps) [2024-12-10T03:14:07.860Z] Copying: 499/1024 [MB] (25 MBps) [2024-12-10T03:14:08.802Z] Copying: 526/1024 [MB] (26 MBps) [2024-12-10T03:14:09.747Z] Copying: 550/1024 [MB] (24 MBps) [2024-12-10T03:14:10.690Z] Copying: 580/1024 [MB] (29 MBps) [2024-12-10T03:14:11.633Z] Copying: 611/1024 [MB] (31 MBps) [2024-12-10T03:14:12.616Z] Copying: 640/1024 [MB] (29 MBps) [2024-12-10T03:14:13.557Z] Copying: 671/1024 [MB] (31 MBps) [2024-12-10T03:14:14.944Z] Copying: 702/1024 [MB] (30 MBps) [2024-12-10T03:14:15.516Z] Copying: 724/1024 [MB] (22 MBps) [2024-12-10T03:14:16.902Z] Copying: 753/1024 [MB] (28 MBps) [2024-12-10T03:14:17.846Z] Copying: 781/1024 [MB] (28 MBps) [2024-12-10T03:14:18.789Z] Copying: 810/1024 [MB] (29 MBps) [2024-12-10T03:14:19.733Z] Copying: 837/1024 [MB] (27 MBps) [2024-12-10T03:14:20.677Z] Copying: 863/1024 [MB] (26 MBps) [2024-12-10T03:14:21.620Z] Copying: 890/1024 [MB] (26 MBps) [2024-12-10T03:14:22.566Z] Copying: 920/1024 [MB] (30 MBps) [2024-12-10T03:14:24.009Z] Copying: 950/1024 [MB] (30 MBps) [2024-12-10T03:14:24.582Z] Copying: 974/1024 [MB] (24 MBps) [2024-12-10T03:14:25.525Z] Copying: 1001/1024 [MB] (26 MBps) [2024-12-10T03:14:26.102Z] Copying: 1024/1024 [MB] (average 27 MBps) 00:26:31.714 00:26:31.714 03:14:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:31.714 03:14:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:31.975 03:14:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:31.975 [2024-12-10 03:14:26.316479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.975 [2024-12-10 03:14:26.316518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:31.975 [2024-12-10 03:14:26.316529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:31.975 [2024-12-10 03:14:26.316537] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.975 [2024-12-10 03:14:26.316557] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:31.975 [2024-12-10 03:14:26.318648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.975 [2024-12-10 03:14:26.318674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:31.975 [2024-12-10 03:14:26.318684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.075 ms 00:26:31.975 [2024-12-10 03:14:26.318690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.975 [2024-12-10 03:14:26.320846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.975 [2024-12-10 03:14:26.320875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:31.975 [2024-12-10 03:14:26.320884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:26:31.975 [2024-12-10 03:14:26.320891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.975 [2024-12-10 03:14:26.334308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.975 [2024-12-10 03:14:26.334337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:31.975 [2024-12-10 03:14:26.334347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.399 ms 00:26:31.975 [2024-12-10 03:14:26.334353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.975 [2024-12-10 03:14:26.339264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.975 [2024-12-10 03:14:26.339289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:31.975 [2024-12-10 03:14:26.339299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.878 ms 00:26:31.975 [2024-12-10 03:14:26.339306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.238 [2024-12-10 03:14:26.357554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.238 [2024-12-10 03:14:26.357582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:32.238 [2024-12-10 03:14:26.357591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.191 ms 00:26:32.238 [2024-12-10 03:14:26.357598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.238 [2024-12-10 03:14:26.369434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.238 [2024-12-10 03:14:26.369464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:32.238 [2024-12-10 03:14:26.369477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.805 ms 00:26:32.238 [2024-12-10 03:14:26.369483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.238 [2024-12-10 03:14:26.369589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.238 [2024-12-10 03:14:26.369601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:32.238 [2024-12-10 03:14:26.369609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:32.238 [2024-12-10 03:14:26.369615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.238 [2024-12-10 03:14:26.386996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.238 [2024-12-10 03:14:26.387023] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:32.238 [2024-12-10 03:14:26.387033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.365 ms 00:26:32.238 [2024-12-10 03:14:26.387038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.238 [2024-12-10 03:14:26.404365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.238 [2024-12-10 03:14:26.404401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:32.238 [2024-12-10 03:14:26.404410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.297 ms 00:26:32.238 [2024-12-10 03:14:26.404416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.238 [2024-12-10 03:14:26.421588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.238 [2024-12-10 03:14:26.421615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:32.238 [2024-12-10 03:14:26.421624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.140 ms 00:26:32.238 [2024-12-10 03:14:26.421630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.238 [2024-12-10 03:14:26.438826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.238 [2024-12-10 03:14:26.438861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:32.238 [2024-12-10 03:14:26.438871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.115 ms 00:26:32.238 [2024-12-10 03:14:26.438876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.238 [2024-12-10 03:14:26.438904] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:32.238 [2024-12-10 03:14:26.438915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.438924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.438930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.438936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.438942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.438949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.438954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.438963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.438969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.438976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.438982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.438989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.438994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439162] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:32.238 [2024-12-10 03:14:26.439211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 
03:14:26.439324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:26:32.239 [2024-12-10 03:14:26.439491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:32.239 [2024-12-10 03:14:26.439579] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:32.239 [2024-12-10 03:14:26.439586] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 201fe23d-9e7b-4c75-b1d6-41bdd989f7c9 00:26:32.239 [2024-12-10 03:14:26.439592] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:32.239 [2024-12-10 03:14:26.439600] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:32.239 [2024-12-10 03:14:26.439607] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:32.239 [2024-12-10 03:14:26.439614] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:32.239 [2024-12-10 03:14:26.439619] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:32.239 [2024-12-10 03:14:26.439626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:32.239 [2024-12-10 03:14:26.439632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:32.239 [2024-12-10 03:14:26.439638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:32.239 [2024-12-10 03:14:26.439642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:32.239 [2024-12-10 03:14:26.439649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.239 [2024-12-10 03:14:26.439654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:32.239 [2024-12-10 03:14:26.439661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:26:32.239 [2024-12-10 03:14:26.439667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:32.239 [2024-12-10 03:14:26.449116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.239 [2024-12-10 03:14:26.449143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:32.239 [2024-12-10 03:14:26.449152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.424 ms 00:26:32.239 [2024-12-10 03:14:26.449158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.239 [2024-12-10 03:14:26.449439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.239 [2024-12-10 03:14:26.449451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:32.239 [2024-12-10 03:14:26.449459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:26:32.239 [2024-12-10 03:14:26.449464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.239 [2024-12-10 03:14:26.482565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.239 [2024-12-10 03:14:26.482593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:32.239 [2024-12-10 03:14:26.482603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.239 [2024-12-10 03:14:26.482609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.239 [2024-12-10 03:14:26.482653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.239 [2024-12-10 03:14:26.482660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:32.239 [2024-12-10 03:14:26.482667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.239 [2024-12-10 03:14:26.482673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.239 [2024-12-10 03:14:26.482723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.239 [2024-12-10 03:14:26.482732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:32.239 [2024-12-10 03:14:26.482740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.239 [2024-12-10 03:14:26.482745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.239 [2024-12-10 03:14:26.482761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.239 [2024-12-10 03:14:26.482767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:32.239 [2024-12-10 03:14:26.482773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.239 [2024-12-10 03:14:26.482779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.239 [2024-12-10 03:14:26.542848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.239 [2024-12-10 03:14:26.542890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:32.239 [2024-12-10 03:14:26.542900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.240 [2024-12-10 03:14:26.542906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.240 [2024-12-10 03:14:26.591589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.240 [2024-12-10 03:14:26.591621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:32.240 [2024-12-10 03:14:26.591631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.240 
[2024-12-10 03:14:26.591638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.240 [2024-12-10 03:14:26.591727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.240 [2024-12-10 03:14:26.591736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:32.240 [2024-12-10 03:14:26.591746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.240 [2024-12-10 03:14:26.591752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.240 [2024-12-10 03:14:26.591791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.240 [2024-12-10 03:14:26.591801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:32.240 [2024-12-10 03:14:26.591809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.240 [2024-12-10 03:14:26.591814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.240 [2024-12-10 03:14:26.591882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.240 [2024-12-10 03:14:26.591892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:32.240 [2024-12-10 03:14:26.591900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.240 [2024-12-10 03:14:26.591907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.240 [2024-12-10 03:14:26.591939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.240 [2024-12-10 03:14:26.591946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:32.240 [2024-12-10 03:14:26.591954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.240 [2024-12-10 03:14:26.591960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.240 [2024-12-10 03:14:26.591990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.240 [2024-12-10 03:14:26.591996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:32.240 [2024-12-10 03:14:26.592003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.240 [2024-12-10 03:14:26.592010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.240 [2024-12-10 03:14:26.592046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.240 [2024-12-10 03:14:26.592053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:32.240 [2024-12-10 03:14:26.592060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.240 [2024-12-10 03:14:26.592067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.240 [2024-12-10 03:14:26.592169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.661 ms, result 0 00:26:32.240 true 00:26:32.240 03:14:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80469 00:26:32.240 03:14:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80469 00:26:32.501 03:14:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:32.501 [2024-12-10 03:14:26.685441] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
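[annotation] This is the step that gives the test its name: after the bdev_ftl_unload trace above completes, the spdk_tgt process itself is SIGKILLed (dirty_shutdown.sh@83) and its shared-memory trace file removed, so no orderly teardown runs. When the device is brought back in the trace below, the load path reports "Performing recovery on blobstore" and "SHM: clean 0, shm_clean 0" and walks the Restore steps for NV cache, valid map, band info, trim, P2L checkpoints, and L2P. A sketch of the kill-and-replay pattern, assuming $pid holds the spdk_tgt PID and using the bdev JSON captured earlier with save_subsystem_config (file names are illustrative):

    # Sketch only: force a dirty shutdown, then drive I/O at the bdev directly.
    # The config capture happens earlier in the test, before the writes.
    {
        echo '{"subsystems": ['
        ./scripts/rpc.py save_subsystem_config -n bdev   # dump the bdev subsystem config
        echo ']}'
    } > ftl.json

    kill -9 "$pid"                               # SIGKILL: no clean teardown runs
    rm -f "/dev/shm/spdk_tgt_trace.pid$pid"      # drop the stale trace shm file

    # Replay writes into the recovered device: spdk_dd loads ftl.json itself,
    # so no running target is needed (--ob names a bdev, not a file).
    ./build/bin/spdk_dd --if=testfile2 --ob=ftl0 \
        --count=262144 --seek=262144 --json=ftl.json

Because spdk_dd constructs the bdev stack from the JSON on its own, the FTL startup (and hence the recovery) runs inside the spdk_dd process, which is what produces the load trace that follows.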
00:26:32.501 [2024-12-10 03:14:26.685561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81101 ] 00:26:32.501 [2024-12-10 03:14:26.841088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.762 [2024-12-10 03:14:26.916467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.149  [2024-12-10T03:14:29.108Z] Copying: 257/1024 [MB] (257 MBps) [2024-12-10T03:14:30.495Z] Copying: 514/1024 [MB] (257 MBps) [2024-12-10T03:14:31.439Z] Copying: 770/1024 [MB] (255 MBps) [2024-12-10T03:14:31.439Z] Copying: 1023/1024 [MB] (252 MBps) [2024-12-10T03:14:31.701Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:26:37.313 00:26:37.313 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80469 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:37.313 03:14:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:37.574 [2024-12-10 03:14:31.723484] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:37.574 [2024-12-10 03:14:31.723606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81155 ] 00:26:37.574 [2024-12-10 03:14:31.878186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.574 [2024-12-10 03:14:31.953409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.835 [2024-12-10 03:14:32.163746] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:37.835 [2024-12-10 03:14:32.163798] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:38.098 [2024-12-10 03:14:32.226312] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:38.098 [2024-12-10 03:14:32.226514] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:38.098 [2024-12-10 03:14:32.226730] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:38.098 [2024-12-10 03:14:32.428595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.098 [2024-12-10 03:14:32.428630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:38.098 [2024-12-10 03:14:32.428640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:38.098 [2024-12-10 03:14:32.428648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.098 [2024-12-10 03:14:32.428683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.098 [2024-12-10 03:14:32.428692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:38.098 [2024-12-10 03:14:32.428698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:38.098 [2024-12-10 03:14:32.428704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.098 [2024-12-10 03:14:32.428717] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:38.098 
[2024-12-10 03:14:32.429215] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:38.098 [2024-12-10 03:14:32.429233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.098 [2024-12-10 03:14:32.429240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:38.098 [2024-12-10 03:14:32.429246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:26:38.098 [2024-12-10 03:14:32.429252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.098 [2024-12-10 03:14:32.430163] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:38.098 [2024-12-10 03:14:32.439530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.098 [2024-12-10 03:14:32.439557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:38.098 [2024-12-10 03:14:32.439566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.368 ms 00:26:38.098 [2024-12-10 03:14:32.439572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.098 [2024-12-10 03:14:32.439615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.098 [2024-12-10 03:14:32.439622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:38.098 [2024-12-10 03:14:32.439629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:38.098 [2024-12-10 03:14:32.439634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.098 [2024-12-10 03:14:32.443926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.098 [2024-12-10 03:14:32.443950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:38.098 [2024-12-10 03:14:32.443957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.251 ms 00:26:38.098 [2024-12-10 03:14:32.443963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.098 [2024-12-10 03:14:32.444016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.098 [2024-12-10 03:14:32.444023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:38.098 [2024-12-10 03:14:32.444029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:38.098 [2024-12-10 03:14:32.444034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.098 [2024-12-10 03:14:32.444067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.098 [2024-12-10 03:14:32.444074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:38.098 [2024-12-10 03:14:32.444080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:38.098 [2024-12-10 03:14:32.444085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.098 [2024-12-10 03:14:32.444098] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:38.098 [2024-12-10 03:14:32.446737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.098 [2024-12-10 03:14:32.446761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:38.098 [2024-12-10 03:14:32.446768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:26:38.098 [2024-12-10 03:14:32.446774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:38.099 [2024-12-10 03:14:32.446800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.099 [2024-12-10 03:14:32.446807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:38.099 [2024-12-10 03:14:32.446814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:38.099 [2024-12-10 03:14:32.446819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.099 [2024-12-10 03:14:32.446834] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:38.099 [2024-12-10 03:14:32.446849] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:38.099 [2024-12-10 03:14:32.446879] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:38.099 [2024-12-10 03:14:32.446890] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:38.099 [2024-12-10 03:14:32.446969] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:38.099 [2024-12-10 03:14:32.446977] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:38.099 [2024-12-10 03:14:32.446985] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:38.099 [2024-12-10 03:14:32.446994] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:38.099 [2024-12-10 03:14:32.447001] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:38.099 [2024-12-10 03:14:32.447007] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:38.099 [2024-12-10 03:14:32.447012] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:38.099 [2024-12-10 03:14:32.447017] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:38.099 [2024-12-10 03:14:32.447023] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:38.099 [2024-12-10 03:14:32.447030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.099 [2024-12-10 03:14:32.447035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:38.099 [2024-12-10 03:14:32.447041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:26:38.099 [2024-12-10 03:14:32.447046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.099 [2024-12-10 03:14:32.447109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.099 [2024-12-10 03:14:32.447116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:38.099 [2024-12-10 03:14:32.447122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:38.099 [2024-12-10 03:14:32.447128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.099 [2024-12-10 03:14:32.447203] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:38.099 [2024-12-10 03:14:32.447210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:38.099 [2024-12-10 03:14:32.447216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:38.099 [2024-12-10 03:14:32.447222] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.099 [2024-12-10 03:14:32.447228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:38.099 [2024-12-10 03:14:32.447233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:38.099 [2024-12-10 03:14:32.447240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:38.099 [2024-12-10 03:14:32.447245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:38.099 [2024-12-10 03:14:32.447251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:38.099 [2024-12-10 03:14:32.447259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:38.099 [2024-12-10 03:14:32.447264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:38.099 [2024-12-10 03:14:32.447269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:38.099 [2024-12-10 03:14:32.447274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:38.099 [2024-12-10 03:14:32.447279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:38.099 [2024-12-10 03:14:32.447284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:38.099 [2024-12-10 03:14:32.447289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.099 [2024-12-10 03:14:32.447294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:38.099 [2024-12-10 03:14:32.447299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:38.099 [2024-12-10 03:14:32.447304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.099 [2024-12-10 03:14:32.447309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:38.099 [2024-12-10 03:14:32.447314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:38.099 [2024-12-10 03:14:32.447318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.099 [2024-12-10 03:14:32.447323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:38.099 [2024-12-10 03:14:32.447328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:38.099 [2024-12-10 03:14:32.447333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.099 [2024-12-10 03:14:32.447337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:38.099 [2024-12-10 03:14:32.447342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:38.099 [2024-12-10 03:14:32.447347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.099 [2024-12-10 03:14:32.447352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:38.099 [2024-12-10 03:14:32.447357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:38.099 [2024-12-10 03:14:32.447361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.099 [2024-12-10 03:14:32.447366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:38.099 [2024-12-10 03:14:32.447371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:38.099 [2024-12-10 03:14:32.447385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:38.099 [2024-12-10 03:14:32.447390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:38.099 
[2024-12-10 03:14:32.447395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:38.099 [2024-12-10 03:14:32.447399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:38.099 [2024-12-10 03:14:32.447404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:38.099 [2024-12-10 03:14:32.447415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:38.099 [2024-12-10 03:14:32.447421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.099 [2024-12-10 03:14:32.447426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:38.099 [2024-12-10 03:14:32.447430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:38.099 [2024-12-10 03:14:32.447435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.099 [2024-12-10 03:14:32.447440] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:38.099 [2024-12-10 03:14:32.447446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:38.099 [2024-12-10 03:14:32.447453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:38.099 [2024-12-10 03:14:32.447459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.099 [2024-12-10 03:14:32.447465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:38.099 [2024-12-10 03:14:32.447470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:38.099 [2024-12-10 03:14:32.447475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:38.099 [2024-12-10 03:14:32.447481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:38.099 [2024-12-10 03:14:32.447486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:38.099 [2024-12-10 03:14:32.447491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:38.099 [2024-12-10 03:14:32.447497] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:38.099 [2024-12-10 03:14:32.447504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:38.099 [2024-12-10 03:14:32.447510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:38.099 [2024-12-10 03:14:32.447515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:38.099 [2024-12-10 03:14:32.447520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:38.099 [2024-12-10 03:14:32.447525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:38.099 [2024-12-10 03:14:32.447530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:38.099 [2024-12-10 03:14:32.447536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:38.099 [2024-12-10 03:14:32.447541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:26:38.099 [2024-12-10 03:14:32.447546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:38.099 [2024-12-10 03:14:32.447551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:38.099 [2024-12-10 03:14:32.447556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:38.099 [2024-12-10 03:14:32.447561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:38.099 [2024-12-10 03:14:32.447566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:38.099 [2024-12-10 03:14:32.447571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:38.099 [2024-12-10 03:14:32.447576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:38.099 [2024-12-10 03:14:32.447581] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:38.099 [2024-12-10 03:14:32.447589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:38.099 [2024-12-10 03:14:32.447595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:38.099 [2024-12-10 03:14:32.447600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:38.099 [2024-12-10 03:14:32.447606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:38.100 [2024-12-10 03:14:32.447611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:38.100 [2024-12-10 03:14:32.447617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.100 [2024-12-10 03:14:32.447622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:38.100 [2024-12-10 03:14:32.447628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:26:38.100 [2024-12-10 03:14:32.447633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.100 [2024-12-10 03:14:32.468146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.100 [2024-12-10 03:14:32.468173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:38.100 [2024-12-10 03:14:32.468181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.479 ms 00:26:38.100 [2024-12-10 03:14:32.468187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.100 [2024-12-10 03:14:32.468255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.100 [2024-12-10 03:14:32.468261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:38.100 [2024-12-10 03:14:32.468267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:38.100 [2024-12-10 
03:14:32.468273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.362 [2024-12-10 03:14:32.500057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.500093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:38.362 [2024-12-10 03:14:32.500107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.742 ms 00:26:38.362 [2024-12-10 03:14:32.500114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.362 [2024-12-10 03:14:32.500159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.500166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:38.362 [2024-12-10 03:14:32.500173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:38.362 [2024-12-10 03:14:32.500179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.362 [2024-12-10 03:14:32.500520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.500534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:38.362 [2024-12-10 03:14:32.500541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:26:38.362 [2024-12-10 03:14:32.500551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.362 [2024-12-10 03:14:32.500650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.500657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:38.362 [2024-12-10 03:14:32.500663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:26:38.362 [2024-12-10 03:14:32.500669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.362 [2024-12-10 03:14:32.511484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.511507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:38.362 [2024-12-10 03:14:32.511515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.798 ms 00:26:38.362 [2024-12-10 03:14:32.511521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.362 [2024-12-10 03:14:32.521546] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:38.362 [2024-12-10 03:14:32.521574] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:38.362 [2024-12-10 03:14:32.521584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.521591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:38.362 [2024-12-10 03:14:32.521598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.986 ms 00:26:38.362 [2024-12-10 03:14:32.521604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.362 [2024-12-10 03:14:32.540939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.540973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:38.362 [2024-12-10 03:14:32.540985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.091 ms 00:26:38.362 [2024-12-10 03:14:32.540993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:38.362 [2024-12-10 03:14:32.549966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.549992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:38.362 [2024-12-10 03:14:32.550000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.939 ms 00:26:38.362 [2024-12-10 03:14:32.550005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.362 [2024-12-10 03:14:32.558649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.558671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:38.362 [2024-12-10 03:14:32.558679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.617 ms 00:26:38.362 [2024-12-10 03:14:32.558684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.362 [2024-12-10 03:14:32.559160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.559177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:38.362 [2024-12-10 03:14:32.559184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:26:38.362 [2024-12-10 03:14:32.559190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.362 [2024-12-10 03:14:32.603039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.603080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:38.362 [2024-12-10 03:14:32.603090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.836 ms 00:26:38.362 [2024-12-10 03:14:32.603096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.362 [2024-12-10 03:14:32.611097] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:38.362 [2024-12-10 03:14:32.613144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.613167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:38.362 [2024-12-10 03:14:32.613176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.001 ms 00:26:38.362 [2024-12-10 03:14:32.613187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.362 [2024-12-10 03:14:32.613254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.613264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:38.362 [2024-12-10 03:14:32.613272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:38.362 [2024-12-10 03:14:32.613278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.362 [2024-12-10 03:14:32.613331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.362 [2024-12-10 03:14:32.613345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:38.363 [2024-12-10 03:14:32.613352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:38.363 [2024-12-10 03:14:32.613358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.363 [2024-12-10 03:14:32.613393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.363 [2024-12-10 03:14:32.613401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:38.363 
[2024-12-10 03:14:32.613407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:38.363 [2024-12-10 03:14:32.613413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.363 [2024-12-10 03:14:32.613440] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:38.363 [2024-12-10 03:14:32.613449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.363 [2024-12-10 03:14:32.613455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:38.363 [2024-12-10 03:14:32.613461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:38.363 [2024-12-10 03:14:32.613470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.363 [2024-12-10 03:14:32.631056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.363 [2024-12-10 03:14:32.631081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:38.363 [2024-12-10 03:14:32.631090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.571 ms 00:26:38.363 [2024-12-10 03:14:32.631097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.363 [2024-12-10 03:14:32.631151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.363 [2024-12-10 03:14:32.631159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:38.363 [2024-12-10 03:14:32.631166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:38.363 [2024-12-10 03:14:32.631171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.363 [2024-12-10 03:14:32.631943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 202.992 ms, result 0 00:26:39.305  [2024-12-10T03:14:35.111Z] Copying: 26/1024 [MB] (26 MBps) [... ~55 intermediate spdk_dd progress-meter frames elided (10-37 MBps per tick) ...] [2024-12-10T03:15:29.278Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-10 03:15:29.219961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.890 [2024-12-10 03:15:29.220045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:34.890 [2024-12-10 03:15:29.220063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:34.890 [2024-12-10 03:15:29.220072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.890 [2024-12-10 03:15:29.220104] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:34.890 [2024-12-10 03:15:29.223675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.890 [2024-12-10 03:15:29.223715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:34.890 [2024-12-10 03:15:29.223726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.548 ms 00:27:34.890 [2024-12-10 03:15:29.223745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.890 [2024-12-10 03:15:29.235304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.890 [2024-12-10 03:15:29.235372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:34.890 [2024-12-10 03:15:29.235408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.585 ms 00:27:34.890 [2024-12-10 03:15:29.235419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.890 [2024-12-10 03:15:29.261855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.890 [2024-12-10 03:15:29.262087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:34.890 [2024-12-10 03:15:29.262113] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 26.416 ms 00:27:34.890 [2024-12-10 03:15:29.262123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.890 [2024-12-10 03:15:29.268393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.890 [2024-12-10 03:15:29.268571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:34.890 [2024-12-10 03:15:29.268591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.200 ms 00:27:34.891 [2024-12-10 03:15:29.268601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.151 [2024-12-10 03:15:29.295831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.151 [2024-12-10 03:15:29.296047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:35.151 [2024-12-10 03:15:29.296070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.176 ms 00:27:35.151 [2024-12-10 03:15:29.296079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.151 [2024-12-10 03:15:29.312700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.151 [2024-12-10 03:15:29.312752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:35.151 [2024-12-10 03:15:29.312766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.578 ms 00:27:35.151 [2024-12-10 03:15:29.312775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.433 [2024-12-10 03:15:29.589202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.433 [2024-12-10 03:15:29.589274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:35.433 [2024-12-10 03:15:29.589296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 276.368 ms 00:27:35.433 [2024-12-10 03:15:29.589304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.433 [2024-12-10 03:15:29.616147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.433 [2024-12-10 03:15:29.616194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:35.433 [2024-12-10 03:15:29.616206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.826 ms 00:27:35.433 [2024-12-10 03:15:29.616227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.433 [2024-12-10 03:15:29.642142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.433 [2024-12-10 03:15:29.642189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:35.433 [2024-12-10 03:15:29.642201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.864 ms 00:27:35.433 [2024-12-10 03:15:29.642209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.433 [2024-12-10 03:15:29.668010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.433 [2024-12-10 03:15:29.668064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:35.433 [2024-12-10 03:15:29.668077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.752 ms 00:27:35.433 [2024-12-10 03:15:29.668084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.433 [2024-12-10 03:15:29.693849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.434 [2024-12-10 03:15:29.694058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 
00:27:35.434 [2024-12-10 03:15:29.694080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.672 ms 00:27:35.434 [2024-12-10 03:15:29.694088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.434 [2024-12-10 03:15:29.694200] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:35.434 [2024-12-10 03:15:29.694234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 99328 / 261120 wr_cnt: 1 state: open
[... Bands 2-100 elided: 99 identical ftl_dev_dump_bands entries, each "0 / 261120 wr_cnt: 0 state: free" ...]
00:27:35.435 [2024-12-10 03:15:29.695083] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:35.435 [2024-12-10 03:15:29.695092] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 201fe23d-9e7b-4c75-b1d6-41bdd989f7c9 00:27:35.435 [2024-12-10 03:15:29.695112] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 99328 00:27:35.435 [2024-12-10 03:15:29.695120] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 100288 00:27:35.435 [2024-12-10 03:15:29.695128] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 99328 00:27:35.435 [2024-12-10 03:15:29.695137] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0097 00:27:35.435 [2024-12-10 03:15:29.695145] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:35.435 [2024-12-10 03:15:29.695153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:35.435 [2024-12-10 03:15:29.695161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:35.435 [2024-12-10 03:15:29.695167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:35.435 [2024-12-10 03:15:29.695173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:35.435 [2024-12-10 03:15:29.695182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.435 [2024-12-10 03:15:29.695190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:35.435 [2024-12-10 03:15:29.695199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:27:35.435 [2024-12-10 03:15:29.695207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.435 [2024-12-10 03:15:29.708771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.435 [2024-12-10 03:15:29.708954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:35.435 [2024-12-10 03:15:29.708972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.527 ms 00:27:35.435 [2024-12-10 03:15:29.708981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.435 [2024-12-10 03:15:29.709371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.435 [2024-12-10 03:15:29.709426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:35.435 [2024-12-10 03:15:29.709445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:27:35.435 [2024-12-10 03:15:29.709453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.435 [2024-12-10 03:15:29.746504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.435 [2024-12-10 03:15:29.746554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:35.435 [2024-12-10 03:15:29.746568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.435 [2024-12-10 03:15:29.746577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.435
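The ftl_dev_dump_stats block above reports total writes 100288 against user writes 99328. Assuming the WAF line is simply the ratio of those two counters (which is what the printed value suggests), the 1.0097 figure checks out; a minimal sketch:

    # Cross-check of the WAF value printed by ftl_dev_dump_stats above.
    # Assumption: WAF = total writes / user writes, both taken from the dump.
    total_writes = 100288   # "total writes" in the dump
    user_writes = 99328     # "user writes" in the dump
    print(f"WAF = {total_writes / user_writes:.4f}")  # -> WAF = 1.0097

Under that assumption the 960-block difference would be the FTL's own metadata traffic during the run, consistent with the band dump showing a single open band carrying all 99328 user blocks.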
[2024-12-10 03:15:29.746640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.435 [2024-12-10 03:15:29.746649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:35.435 [2024-12-10 03:15:29.746664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.435 [2024-12-10 03:15:29.746672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.435 [2024-12-10 03:15:29.746762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.435 [2024-12-10 03:15:29.746774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:35.435 [2024-12-10 03:15:29.746784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.435 [2024-12-10 03:15:29.746793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.435 [2024-12-10 03:15:29.746809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.435 [2024-12-10 03:15:29.746818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:35.435 [2024-12-10 03:15:29.746827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.435 [2024-12-10 03:15:29.746836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.708 [2024-12-10 03:15:29.832340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.708 [2024-12-10 03:15:29.832414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:35.708 [2024-12-10 03:15:29.832428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.708 [2024-12-10 03:15:29.832437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.708 [2024-12-10 03:15:29.902594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.708 [2024-12-10 03:15:29.902812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:35.708 [2024-12-10 03:15:29.902833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.708 [2024-12-10 03:15:29.902852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.708 [2024-12-10 03:15:29.902921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.709 [2024-12-10 03:15:29.902932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:35.709 [2024-12-10 03:15:29.902940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.709 [2024-12-10 03:15:29.902948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.709 [2024-12-10 03:15:29.903009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.709 [2024-12-10 03:15:29.903019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:35.709 [2024-12-10 03:15:29.903028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.709 [2024-12-10 03:15:29.903036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.709 [2024-12-10 03:15:29.903146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.709 [2024-12-10 03:15:29.903158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:35.709 [2024-12-10 03:15:29.903167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.709 [2024-12-10 03:15:29.903175] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.709 [2024-12-10 03:15:29.903208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.709 [2024-12-10 03:15:29.903218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:35.709 [2024-12-10 03:15:29.903227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.709 [2024-12-10 03:15:29.903235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.709 [2024-12-10 03:15:29.903282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.709 [2024-12-10 03:15:29.903292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:35.709 [2024-12-10 03:15:29.903300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.709 [2024-12-10 03:15:29.903309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.709 [2024-12-10 03:15:29.903358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.709 [2024-12-10 03:15:29.903369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:35.709 [2024-12-10 03:15:29.903416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.709 [2024-12-10 03:15:29.903425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.709 [2024-12-10 03:15:29.903569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 683.598 ms, result 0 00:27:37.097 00:27:37.097 00:27:37.097 03:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:39.014 03:15:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:39.014 [2024-12-10 03:15:33.302424] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
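The spdk_dd step above reads the data back from ftl0 into testfile for the test's md5 comparison (the md5sum of testfile2 was taken just before it). Assuming ftl0 exposes 4 KiB logical blocks, which is consistent with the 1024 MiB totals spdk_dd printed for the earlier copy, --count=262144 works out to exactly 1 GiB; a quick check:

    # Size check for the spdk_dd invocation above.
    # Assumption: ftl0 uses 4 KiB logical blocks (consistent with the
    # "Copying: .../1024 [MB]" totals spdk_dd prints in this log).
    block_size = 4096   # bytes per block (assumed)
    count = 262144      # from --count=262144
    print(block_size * count // (1024 * 1024), "MiB")  # -> 1024 MiB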
00:27:39.014 [2024-12-10 03:15:33.302539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81779 ] 00:27:39.277 [2024-12-10 03:15:33.462437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.277 [2024-12-10 03:15:33.566574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.536 [2024-12-10 03:15:33.865239] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:39.536 [2024-12-10 03:15:33.865325] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:39.798 [2024-12-10 03:15:34.023393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.798 [2024-12-10 03:15:34.023437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:39.798 [2024-12-10 03:15:34.023450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:39.798 [2024-12-10 03:15:34.023458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.798 [2024-12-10 03:15:34.023506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.798 [2024-12-10 03:15:34.023518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:39.798 [2024-12-10 03:15:34.023527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:39.798 [2024-12-10 03:15:34.023534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.798 [2024-12-10 03:15:34.023553] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:39.798 [2024-12-10 03:15:34.024240] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:39.798 [2024-12-10 03:15:34.024255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.798 [2024-12-10 03:15:34.024263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:39.798 [2024-12-10 03:15:34.024272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:27:39.798 [2024-12-10 03:15:34.024279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.798 [2024-12-10 03:15:34.025336] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:39.798 [2024-12-10 03:15:34.038276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.798 [2024-12-10 03:15:34.038317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:39.798 [2024-12-10 03:15:34.038333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.941 ms 00:27:39.798 [2024-12-10 03:15:34.038346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.798 [2024-12-10 03:15:34.038432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.798 [2024-12-10 03:15:34.038442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:39.798 [2024-12-10 03:15:34.038450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:39.798 [2024-12-10 03:15:34.038457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.798 [2024-12-10 03:15:34.043406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:39.798 [2024-12-10 03:15:34.043433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:39.798 [2024-12-10 03:15:34.043442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.900 ms 00:27:39.798 [2024-12-10 03:15:34.043454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.798 [2024-12-10 03:15:34.043524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.798 [2024-12-10 03:15:34.043533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:39.798 [2024-12-10 03:15:34.043541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:39.798 [2024-12-10 03:15:34.043548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.798 [2024-12-10 03:15:34.043598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.798 [2024-12-10 03:15:34.043607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:39.798 [2024-12-10 03:15:34.043615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:39.798 [2024-12-10 03:15:34.043622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.798 [2024-12-10 03:15:34.043645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:39.798 [2024-12-10 03:15:34.047084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.798 [2024-12-10 03:15:34.047110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:39.798 [2024-12-10 03:15:34.047122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.443 ms 00:27:39.798 [2024-12-10 03:15:34.047130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.798 [2024-12-10 03:15:34.047159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.798 [2024-12-10 03:15:34.047167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:39.798 [2024-12-10 03:15:34.047175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:39.798 [2024-12-10 03:15:34.047182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.798 [2024-12-10 03:15:34.047201] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:39.798 [2024-12-10 03:15:34.047220] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:39.798 [2024-12-10 03:15:34.047255] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:39.798 [2024-12-10 03:15:34.047272] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:39.798 [2024-12-10 03:15:34.047383] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:39.798 [2024-12-10 03:15:34.047394] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:39.798 [2024-12-10 03:15:34.047404] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:39.798 [2024-12-10 03:15:34.047413] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:39.798 [2024-12-10 03:15:34.047422] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:39.798 [2024-12-10 03:15:34.047430] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:39.798 [2024-12-10 03:15:34.047437] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:39.798 [2024-12-10 03:15:34.047447] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:39.798 [2024-12-10 03:15:34.047454] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:39.798 [2024-12-10 03:15:34.047461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.798 [2024-12-10 03:15:34.047469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:39.798 [2024-12-10 03:15:34.047476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:27:39.798 [2024-12-10 03:15:34.047483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.798 [2024-12-10 03:15:34.047565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.798 [2024-12-10 03:15:34.047573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:39.798 [2024-12-10 03:15:34.047580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:39.798 [2024-12-10 03:15:34.047587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.798 [2024-12-10 03:15:34.047700] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:39.798 [2024-12-10 03:15:34.047710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:39.798 [2024-12-10 03:15:34.047718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:39.798 [2024-12-10 03:15:34.047725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.798 [2024-12-10 03:15:34.047732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:39.798 [2024-12-10 03:15:34.047739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:39.799 [2024-12-10 03:15:34.047746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:39.799 [2024-12-10 03:15:34.047753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:39.799 [2024-12-10 03:15:34.047760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:39.799 [2024-12-10 03:15:34.047766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:39.799 [2024-12-10 03:15:34.047773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:39.799 [2024-12-10 03:15:34.047780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:39.799 [2024-12-10 03:15:34.047787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:39.799 [2024-12-10 03:15:34.047799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:39.799 [2024-12-10 03:15:34.047806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:39.799 [2024-12-10 03:15:34.047812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.799 [2024-12-10 03:15:34.047819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:39.799 [2024-12-10 03:15:34.047825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:39.799 [2024-12-10 03:15:34.047831] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.799 [2024-12-10 03:15:34.047838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:39.799 [2024-12-10 03:15:34.047844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:39.799 [2024-12-10 03:15:34.047851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.799 [2024-12-10 03:15:34.047858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:39.799 [2024-12-10 03:15:34.047864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:39.799 [2024-12-10 03:15:34.047870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.799 [2024-12-10 03:15:34.047877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:39.799 [2024-12-10 03:15:34.047883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:39.799 [2024-12-10 03:15:34.047890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.799 [2024-12-10 03:15:34.047896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:39.799 [2024-12-10 03:15:34.047903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:39.799 [2024-12-10 03:15:34.047909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.799 [2024-12-10 03:15:34.047915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:39.799 [2024-12-10 03:15:34.047929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:39.799 [2024-12-10 03:15:34.047936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:39.799 [2024-12-10 03:15:34.047942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:39.799 [2024-12-10 03:15:34.047948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:39.799 [2024-12-10 03:15:34.047954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:39.799 [2024-12-10 03:15:34.047961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:39.799 [2024-12-10 03:15:34.047967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:39.799 [2024-12-10 03:15:34.047973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.799 [2024-12-10 03:15:34.047980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:39.799 [2024-12-10 03:15:34.047987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:39.799 [2024-12-10 03:15:34.047994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.799 [2024-12-10 03:15:34.048002] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:39.799 [2024-12-10 03:15:34.048009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:39.799 [2024-12-10 03:15:34.048017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:39.799 [2024-12-10 03:15:34.048024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.799 [2024-12-10 03:15:34.048032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:39.799 [2024-12-10 03:15:34.048038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:39.799 [2024-12-10 03:15:34.048045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:39.799 
[2024-12-10 03:15:34.048051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:39.799 [2024-12-10 03:15:34.048057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:39.799 [2024-12-10 03:15:34.048064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:39.799 [2024-12-10 03:15:34.048072] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:39.799 [2024-12-10 03:15:34.048081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:39.799 [2024-12-10 03:15:34.048092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:39.799 [2024-12-10 03:15:34.048099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:39.799 [2024-12-10 03:15:34.048106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:39.799 [2024-12-10 03:15:34.048113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:39.799 [2024-12-10 03:15:34.048120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:39.799 [2024-12-10 03:15:34.048126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:39.799 [2024-12-10 03:15:34.048133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:39.799 [2024-12-10 03:15:34.048140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:39.799 [2024-12-10 03:15:34.048147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:39.799 [2024-12-10 03:15:34.048153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:39.799 [2024-12-10 03:15:34.048160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:39.799 [2024-12-10 03:15:34.048167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:39.799 [2024-12-10 03:15:34.048173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:39.799 [2024-12-10 03:15:34.048180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:39.799 [2024-12-10 03:15:34.048187] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:39.799 [2024-12-10 03:15:34.048195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:39.799 [2024-12-10 03:15:34.048204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:39.799 [2024-12-10 03:15:34.048211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:39.799 [2024-12-10 03:15:34.048218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:39.799 [2024-12-10 03:15:34.048225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:39.799 [2024-12-10 03:15:34.048235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.799 [2024-12-10 03:15:34.048242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:39.799 [2024-12-10 03:15:34.048250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:27:39.799 [2024-12-10 03:15:34.048257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.799 [2024-12-10 03:15:34.074903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.799 [2024-12-10 03:15:34.075026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:39.799 [2024-12-10 03:15:34.075079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.601 ms 00:27:39.799 [2024-12-10 03:15:34.075107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.799 [2024-12-10 03:15:34.075203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.799 [2024-12-10 03:15:34.075224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:39.799 [2024-12-10 03:15:34.075244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:39.799 [2024-12-10 03:15:34.075262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.799 [2024-12-10 03:15:34.119679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.799 [2024-12-10 03:15:34.119828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:39.799 [2024-12-10 03:15:34.119886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.352 ms 00:27:39.799 [2024-12-10 03:15:34.119910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.799 [2024-12-10 03:15:34.119971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.799 [2024-12-10 03:15:34.119996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:39.799 [2024-12-10 03:15:34.120022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:39.799 [2024-12-10 03:15:34.120041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.799 [2024-12-10 03:15:34.120464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.799 [2024-12-10 03:15:34.120613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:39.799 [2024-12-10 03:15:34.120665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:27:39.799 [2024-12-10 03:15:34.120687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.799 [2024-12-10 03:15:34.120835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.799 [2024-12-10 03:15:34.120860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:39.799 [2024-12-10 03:15:34.120915] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:27:39.799 [2024-12-10 03:15:34.120936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.799 [2024-12-10 03:15:34.134459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.799 [2024-12-10 03:15:34.134574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:39.799 [2024-12-10 03:15:34.134624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.492 ms 00:27:39.799 [2024-12-10 03:15:34.134646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.799 [2024-12-10 03:15:34.147619] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:39.799 [2024-12-10 03:15:34.147753] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:39.800 [2024-12-10 03:15:34.147813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.800 [2024-12-10 03:15:34.147833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:39.800 [2024-12-10 03:15:34.147853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.061 ms 00:27:39.800 [2024-12-10 03:15:34.147871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.800 [2024-12-10 03:15:34.172149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.800 [2024-12-10 03:15:34.172274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:39.800 [2024-12-10 03:15:34.172325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.234 ms 00:27:39.800 [2024-12-10 03:15:34.172346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.061 [2024-12-10 03:15:34.184555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.061 [2024-12-10 03:15:34.184699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:40.061 [2024-12-10 03:15:34.184717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.860 ms 00:27:40.061 [2024-12-10 03:15:34.184726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.061 [2024-12-10 03:15:34.196358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.061 [2024-12-10 03:15:34.196407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:40.061 [2024-12-10 03:15:34.196418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.597 ms 00:27:40.061 [2024-12-10 03:15:34.196426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.061 [2024-12-10 03:15:34.197035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.061 [2024-12-10 03:15:34.197061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:40.061 [2024-12-10 03:15:34.197074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:27:40.061 [2024-12-10 03:15:34.197081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.061 [2024-12-10 03:15:34.255652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.061 [2024-12-10 03:15:34.255698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:40.061 [2024-12-10 03:15:34.255718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 58.554 ms 00:27:40.061 [2024-12-10 03:15:34.255726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.061 [2024-12-10 03:15:34.266546] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:40.061 [2024-12-10 03:15:34.269130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.061 [2024-12-10 03:15:34.269289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:40.061 [2024-12-10 03:15:34.269307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.360 ms 00:27:40.061 [2024-12-10 03:15:34.269315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.061 [2024-12-10 03:15:34.269431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.061 [2024-12-10 03:15:34.269443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:40.061 [2024-12-10 03:15:34.269455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:40.061 [2024-12-10 03:15:34.269463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.061 [2024-12-10 03:15:34.270919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.061 [2024-12-10 03:15:34.270959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:40.061 [2024-12-10 03:15:34.270970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.414 ms 00:27:40.061 [2024-12-10 03:15:34.270978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.061 [2024-12-10 03:15:34.271004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.061 [2024-12-10 03:15:34.271012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:40.061 [2024-12-10 03:15:34.271021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:40.061 [2024-12-10 03:15:34.271029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.061 [2024-12-10 03:15:34.271069] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:40.061 [2024-12-10 03:15:34.271080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.061 [2024-12-10 03:15:34.271087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:40.062 [2024-12-10 03:15:34.271096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:40.062 [2024-12-10 03:15:34.271103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.062 [2024-12-10 03:15:34.295640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.062 [2024-12-10 03:15:34.295806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:40.062 [2024-12-10 03:15:34.295834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.519 ms 00:27:40.062 [2024-12-10 03:15:34.295843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.062 [2024-12-10 03:15:34.295915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.062 [2024-12-10 03:15:34.295946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:40.062 [2024-12-10 03:15:34.295955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:40.062 [2024-12-10 03:15:34.295963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
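The FTL management steps traced above follow a fixed pattern from mngt/ftl_mngt.c: each step is reported as an "Action" (or, during teardown, "Rollback") marker, a "name" line, a "duration" in milliseconds, and a "status" code, with 0 meaning success. The per-step durations roughly add up to the overall figure that the finish_msg line below reports for the whole 'FTL startup' process; the gap is time spent between steps. As a minimal sketch of how such a log can be profiled (a hypothetical helper, not part of SPDK or of this test suite, written against the console format shown here), the following Python pairs each step name with its duration:

    import re
    import sys

    # Pair every "name: <step>" NOTICE with the nearest following
    # "duration: <n> ms" NOTICE. The step name is assumed to end at the
    # next HH:MM:SS.mmm console timestamp, as in this log.
    STEP_RE = re.compile(
        r"name: (?P<name>.+?)\s\d{2}:\d{2}:\d{2}\.\d{3}"
        r".*?duration: (?P<ms>[0-9.]+) ms",
        re.DOTALL,
    )

    def summarize(log_text):
        """Return (step_name, duration_ms) pairs in log order."""
        return [(m.group("name"), float(m.group("ms")))
                for m in STEP_RE.finditer(log_text)]

    if __name__ == "__main__":
        steps = summarize(sys.stdin.read())
        for name, ms in steps:
            print(f"{ms:10.3f} ms  {name}")
        print(f"{sum(ms for _, ms in steps):10.3f} ms  total")

Fed this console output, it would list, for example, Initialize NV cache at 44.352 ms and Restore P2L checkpoints at 58.554 ms, with a total close to the 273.279 ms reported below.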
00:27:40.062 [2024-12-10 03:15:34.297162] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 273.279 ms, result 0
00:27:41.449  [2024-12-10T03:15:36.783Z] Copying: 1152/1048576 [kB] (1152 kBps) [...] [2024-12-10T03:16:17.490Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-10 03:16:17.264944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:23.102 [2024-12-10 03:16:17.265033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:23.102 [2024-12-10 03:16:17.265052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:28:23.102 [2024-12-10 03:16:17.265064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:23.102 [2024-12-10 03:16:17.265092] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:23.102 [2024-12-10 03:16:17.269017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:23.102 [2024-12-10 03:16:17.269066] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:23.102 [2024-12-10 03:16:17.269080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.904 ms 00:28:23.103 [2024-12-10 03:16:17.269090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.103 [2024-12-10 03:16:17.269370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.103 [2024-12-10 03:16:17.269414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:23.103 [2024-12-10 03:16:17.269426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:28:23.103 [2024-12-10 03:16:17.269435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.103 [2024-12-10 03:16:17.284087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.103 [2024-12-10 03:16:17.284144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:23.103 [2024-12-10 03:16:17.284158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.630 ms 00:28:23.103 [2024-12-10 03:16:17.284166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.103 [2024-12-10 03:16:17.290623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.103 [2024-12-10 03:16:17.290666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:23.103 [2024-12-10 03:16:17.290690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.416 ms 00:28:23.103 [2024-12-10 03:16:17.290698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.103 [2024-12-10 03:16:17.317701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.103 [2024-12-10 03:16:17.317748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:23.103 [2024-12-10 03:16:17.317761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.937 ms 00:28:23.103 [2024-12-10 03:16:17.317769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.103 [2024-12-10 03:16:17.334501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.103 [2024-12-10 03:16:17.334547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:23.103 [2024-12-10 03:16:17.334560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.683 ms 00:28:23.103 [2024-12-10 03:16:17.334569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.103 [2024-12-10 03:16:17.338079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.103 [2024-12-10 03:16:17.338127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:23.103 [2024-12-10 03:16:17.338141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.456 ms 00:28:23.103 [2024-12-10 03:16:17.338157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.103 [2024-12-10 03:16:17.364090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.103 [2024-12-10 03:16:17.364294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:23.103 [2024-12-10 03:16:17.364316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.914 ms 00:28:23.103 [2024-12-10 03:16:17.364324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.103 [2024-12-10 03:16:17.390113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:23.103 [2024-12-10 03:16:17.390165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:23.103 [2024-12-10 03:16:17.390180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.413 ms 00:28:23.103 [2024-12-10 03:16:17.390188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.103 [2024-12-10 03:16:17.415238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.103 [2024-12-10 03:16:17.415281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:23.103 [2024-12-10 03:16:17.415294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.999 ms 00:28:23.103 [2024-12-10 03:16:17.415301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.103 [2024-12-10 03:16:17.440433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.103 [2024-12-10 03:16:17.440481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:23.103 [2024-12-10 03:16:17.440494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.037 ms 00:28:23.103 [2024-12-10 03:16:17.440501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.103 [2024-12-10 03:16:17.440549] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:23.103 [2024-12-10 03:16:17.440565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:23.103 [2024-12-10 03:16:17.440577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:23.103 [2024-12-10 03:16:17.440585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440690] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 
[2024-12-10 03:16:17.440883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:23.103 [2024-12-10 03:16:17.440989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.440997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 
state: free 00:28:23.104 [2024-12-10 03:16:17.441075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 
0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:23.104 [2024-12-10 03:16:17.441349] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:23.104 [2024-12-10 03:16:17.441357] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 201fe23d-9e7b-4c75-b1d6-41bdd989f7c9 00:28:23.104 [2024-12-10 03:16:17.441365] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:23.104 [2024-12-10 03:16:17.441373] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 165312 00:28:23.104 [2024-12-10 03:16:17.441411] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 163328 00:28:23.104 [2024-12-10 03:16:17.441421] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0121 00:28:23.104 [2024-12-10 03:16:17.441429] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:23.104 [2024-12-10 03:16:17.441445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:23.104 [2024-12-10 03:16:17.441453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:23.104 [2024-12-10 03:16:17.441460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:23.104 [2024-12-10 03:16:17.441467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:23.104 [2024-12-10 03:16:17.441475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.104 [2024-12-10 03:16:17.441484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:23.104 [2024-12-10 03:16:17.441493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:28:23.104 [2024-12-10 03:16:17.441501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.104 [2024-12-10 03:16:17.455258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.104 [2024-12-10 03:16:17.455298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:23.104 [2024-12-10 03:16:17.455310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.736 ms 00:28:23.104 [2024-12-10 
03:16:17.455318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.104 [2024-12-10 03:16:17.455760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.104 [2024-12-10 03:16:17.455778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:23.104 [2024-12-10 03:16:17.455788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:28:23.104 [2024-12-10 03:16:17.455796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.366 [2024-12-10 03:16:17.492245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.366 [2024-12-10 03:16:17.492293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:23.366 [2024-12-10 03:16:17.492305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.366 [2024-12-10 03:16:17.492314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.366 [2024-12-10 03:16:17.492405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.366 [2024-12-10 03:16:17.492415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:23.366 [2024-12-10 03:16:17.492425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.366 [2024-12-10 03:16:17.492434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.366 [2024-12-10 03:16:17.492534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.366 [2024-12-10 03:16:17.492545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:23.366 [2024-12-10 03:16:17.492554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.366 [2024-12-10 03:16:17.492562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.366 [2024-12-10 03:16:17.492579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.366 [2024-12-10 03:16:17.492589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:23.366 [2024-12-10 03:16:17.492597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.366 [2024-12-10 03:16:17.492605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.366 [2024-12-10 03:16:17.576925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.366 [2024-12-10 03:16:17.577172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:23.366 [2024-12-10 03:16:17.577195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.366 [2024-12-10 03:16:17.577205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.366 [2024-12-10 03:16:17.646597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.366 [2024-12-10 03:16:17.646650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:23.366 [2024-12-10 03:16:17.646663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.366 [2024-12-10 03:16:17.646672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.366 [2024-12-10 03:16:17.646734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.366 [2024-12-10 03:16:17.646752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:23.366 [2024-12-10 03:16:17.646761] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:23.366 [2024-12-10 03:16:17.646770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:23.366 [2024-12-10 03:16:17.646829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:23.366 [2024-12-10 03:16:17.646839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:28:23.366 [2024-12-10 03:16:17.646848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:23.366 [2024-12-10 03:16:17.646857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:23.366 [2024-12-10 03:16:17.646956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:23.366 [2024-12-10 03:16:17.646968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:28:23.366 [2024-12-10 03:16:17.646980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:23.366 [2024-12-10 03:16:17.646989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:23.366 [2024-12-10 03:16:17.647024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:23.366 [2024-12-10 03:16:17.647033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:28:23.366 [2024-12-10 03:16:17.647042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:23.366 [2024-12-10 03:16:17.647050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:23.366 [2024-12-10 03:16:17.647096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:23.366 [2024-12-10 03:16:17.647106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:28:23.366 [2024-12-10 03:16:17.647118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:23.366 [2024-12-10 03:16:17.647127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:23.366 [2024-12-10 03:16:17.647177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:23.366 [2024-12-10 03:16:17.647188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:23.366 [2024-12-10 03:16:17.647197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:23.366 [2024-12-10 03:16:17.647206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:23.366 [2024-12-10 03:16:17.647345] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.388 ms, result 0
00:28:24.310 
00:28:24.310 
00:28:24.310 03:16:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:28:26.230 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:28:26.230 03:16:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-10 03:16:20.488004] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
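The two shell-trace lines above are the core of the dirty-shutdown verification: md5sum -c confirms that the first slice of test data written before the forced shutdown reads back unchanged, and spdk_dd then pulls the second slice back out of ftl0 (--skip=262144 --count=262144) into testfile2 for the checksum that follows later in the test. As a quick sanity check on the figures visible in this excerpt, the short sketch below (not part of the test suite; it assumes --count and --skip are expressed in the FTL bdev's 4 KiB logical blocks, a block size this excerpt never prints) reproduces both the copy size and the write-amplification factor from the shutdown statistics dump:

    # Assumption: 4 KiB logical blocks, making --count/--skip block counts.
    BLOCK_SIZE = 4096
    COUNT = 262144  # also the --skip value in the spdk_dd command above

    # 262144 blocks * 4096 B = 1 GiB, matching the "Copying: .../1024 [MB]"
    # progress totals seen earlier in this log.
    print(f"read-back size: {COUNT * BLOCK_SIZE // (1024 * 1024)} MiB")

    # Write amplification from the ftl_debug.c stats dump above:
    # WAF = total media writes / user writes = 165312 / 163328.
    total_writes, user_writes = 165312, 163328
    print(f"WAF: {total_writes / user_writes:.4f}")  # 1.0121, as logged

A WAF this close to 1.0 is what a single sequential copy should produce: the FTL adds almost no garbage-collection traffic on top of the user writes, only its own metadata updates.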
00:28:26.230 [2024-12-10 03:16:20.488093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82258 ] 00:28:26.492 [2024-12-10 03:16:20.642333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.492 [2024-12-10 03:16:20.743441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.752 [2024-12-10 03:16:21.042602] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:26.752 [2024-12-10 03:16:21.042687] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:27.013 [2024-12-10 03:16:21.201495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.013 [2024-12-10 03:16:21.201537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:27.013 [2024-12-10 03:16:21.201550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:27.013 [2024-12-10 03:16:21.201557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.013 [2024-12-10 03:16:21.201603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.013 [2024-12-10 03:16:21.201616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:27.013 [2024-12-10 03:16:21.201624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:27.013 [2024-12-10 03:16:21.201632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.013 [2024-12-10 03:16:21.201647] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:27.013 [2024-12-10 03:16:21.202362] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:27.013 [2024-12-10 03:16:21.202399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.013 [2024-12-10 03:16:21.202407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:27.013 [2024-12-10 03:16:21.202416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:28:27.013 [2024-12-10 03:16:21.202423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.013 [2024-12-10 03:16:21.203484] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:27.013 [2024-12-10 03:16:21.215986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.013 [2024-12-10 03:16:21.216017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:27.013 [2024-12-10 03:16:21.216028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.503 ms 00:28:27.013 [2024-12-10 03:16:21.216037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.013 [2024-12-10 03:16:21.216093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.013 [2024-12-10 03:16:21.216103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:27.013 [2024-12-10 03:16:21.216111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:27.013 [2024-12-10 03:16:21.216118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.013 [2024-12-10 03:16:21.220906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:27.013 [2024-12-10 03:16:21.220933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:27.013 [2024-12-10 03:16:21.220943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.740 ms 00:28:27.013 [2024-12-10 03:16:21.220955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.013 [2024-12-10 03:16:21.221020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.013 [2024-12-10 03:16:21.221029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:27.013 [2024-12-10 03:16:21.221036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:27.013 [2024-12-10 03:16:21.221043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.013 [2024-12-10 03:16:21.221088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.013 [2024-12-10 03:16:21.221099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:27.013 [2024-12-10 03:16:21.221106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:27.013 [2024-12-10 03:16:21.221114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.013 [2024-12-10 03:16:21.221136] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:27.013 [2024-12-10 03:16:21.224347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.014 [2024-12-10 03:16:21.224486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:27.014 [2024-12-10 03:16:21.224507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.215 ms 00:28:27.014 [2024-12-10 03:16:21.224514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.014 [2024-12-10 03:16:21.224547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.014 [2024-12-10 03:16:21.224555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:27.014 [2024-12-10 03:16:21.224562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:27.014 [2024-12-10 03:16:21.224569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.014 [2024-12-10 03:16:21.224589] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:27.014 [2024-12-10 03:16:21.224607] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:27.014 [2024-12-10 03:16:21.224641] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:27.014 [2024-12-10 03:16:21.224658] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:27.014 [2024-12-10 03:16:21.224759] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:27.014 [2024-12-10 03:16:21.224769] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:27.014 [2024-12-10 03:16:21.224779] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:27.014 [2024-12-10 03:16:21.224789] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:27.014 [2024-12-10 03:16:21.224797] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:27.014 [2024-12-10 03:16:21.224805] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:27.014 [2024-12-10 03:16:21.224812] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:27.014 [2024-12-10 03:16:21.224821] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:27.014 [2024-12-10 03:16:21.224828] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:27.014 [2024-12-10 03:16:21.224836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.014 [2024-12-10 03:16:21.224843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:27.014 [2024-12-10 03:16:21.224850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:28:27.014 [2024-12-10 03:16:21.224857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.014 [2024-12-10 03:16:21.224938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.014 [2024-12-10 03:16:21.224946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:27.014 [2024-12-10 03:16:21.224953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:27.014 [2024-12-10 03:16:21.224960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.014 [2024-12-10 03:16:21.225072] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:27.014 [2024-12-10 03:16:21.225082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:27.014 [2024-12-10 03:16:21.225090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:27.014 [2024-12-10 03:16:21.225097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.014 [2024-12-10 03:16:21.225105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:27.014 [2024-12-10 03:16:21.225111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:27.014 [2024-12-10 03:16:21.225118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:27.014 [2024-12-10 03:16:21.225124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:27.014 [2024-12-10 03:16:21.225132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:27.014 [2024-12-10 03:16:21.225138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:27.014 [2024-12-10 03:16:21.225145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:27.014 [2024-12-10 03:16:21.225152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:27.014 [2024-12-10 03:16:21.225159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:27.014 [2024-12-10 03:16:21.225171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:27.014 [2024-12-10 03:16:21.225178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:27.014 [2024-12-10 03:16:21.225184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.014 [2024-12-10 03:16:21.225191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:27.014 [2024-12-10 03:16:21.225197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:27.014 [2024-12-10 03:16:21.225204] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.014 [2024-12-10 03:16:21.225210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:27.014 [2024-12-10 03:16:21.225217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:27.014 [2024-12-10 03:16:21.225223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.014 [2024-12-10 03:16:21.225229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:27.014 [2024-12-10 03:16:21.225236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:27.014 [2024-12-10 03:16:21.225242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.014 [2024-12-10 03:16:21.225248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:27.014 [2024-12-10 03:16:21.225255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:27.014 [2024-12-10 03:16:21.225261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.014 [2024-12-10 03:16:21.225267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:27.014 [2024-12-10 03:16:21.225274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:27.014 [2024-12-10 03:16:21.225280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.014 [2024-12-10 03:16:21.225286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:27.014 [2024-12-10 03:16:21.225293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:27.014 [2024-12-10 03:16:21.225299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:27.014 [2024-12-10 03:16:21.225306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:27.014 [2024-12-10 03:16:21.225312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:27.014 [2024-12-10 03:16:21.225318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:27.014 [2024-12-10 03:16:21.225325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:27.014 [2024-12-10 03:16:21.225331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:27.014 [2024-12-10 03:16:21.225337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.014 [2024-12-10 03:16:21.225343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:27.014 [2024-12-10 03:16:21.225349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:27.014 [2024-12-10 03:16:21.225355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.014 [2024-12-10 03:16:21.225362] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:27.014 [2024-12-10 03:16:21.225371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:27.014 [2024-12-10 03:16:21.225390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:27.014 [2024-12-10 03:16:21.225397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.014 [2024-12-10 03:16:21.225404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:27.014 [2024-12-10 03:16:21.225411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:27.014 [2024-12-10 03:16:21.225417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:27.014 
[2024-12-10 03:16:21.225424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:27.014 [2024-12-10 03:16:21.225430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:27.014 [2024-12-10 03:16:21.225437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:27.014 [2024-12-10 03:16:21.225445] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:27.014 [2024-12-10 03:16:21.225454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:27.014 [2024-12-10 03:16:21.225465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:27.014 [2024-12-10 03:16:21.225472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:27.014 [2024-12-10 03:16:21.225479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:27.014 [2024-12-10 03:16:21.225486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:27.014 [2024-12-10 03:16:21.225492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:27.014 [2024-12-10 03:16:21.225499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:27.014 [2024-12-10 03:16:21.225506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:27.014 [2024-12-10 03:16:21.225512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:27.014 [2024-12-10 03:16:21.225519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:27.014 [2024-12-10 03:16:21.225526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:27.014 [2024-12-10 03:16:21.225534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:27.014 [2024-12-10 03:16:21.225540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:27.014 [2024-12-10 03:16:21.225547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:27.014 [2024-12-10 03:16:21.225554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:27.014 [2024-12-10 03:16:21.225561] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:27.014 [2024-12-10 03:16:21.225568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:27.014 [2024-12-10 03:16:21.225577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:27.014 [2024-12-10 03:16:21.225583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:27.015 [2024-12-10 03:16:21.225590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:27.015 [2024-12-10 03:16:21.225597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:27.015 [2024-12-10 03:16:21.225605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.015 [2024-12-10 03:16:21.225614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:27.015 [2024-12-10 03:16:21.225621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:28:27.015 [2024-12-10 03:16:21.225628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.015 [2024-12-10 03:16:21.251531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.015 [2024-12-10 03:16:21.251657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:27.015 [2024-12-10 03:16:21.251673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.860 ms 00:28:27.015 [2024-12-10 03:16:21.251686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.015 [2024-12-10 03:16:21.251768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.015 [2024-12-10 03:16:21.251776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:27.015 [2024-12-10 03:16:21.251784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:27.015 [2024-12-10 03:16:21.251791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.015 [2024-12-10 03:16:21.295489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.015 [2024-12-10 03:16:21.295526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:27.015 [2024-12-10 03:16:21.295538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.647 ms 00:28:27.015 [2024-12-10 03:16:21.295546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.015 [2024-12-10 03:16:21.295583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.015 [2024-12-10 03:16:21.295592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:27.015 [2024-12-10 03:16:21.295603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:27.015 [2024-12-10 03:16:21.295611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.015 [2024-12-10 03:16:21.296003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.015 [2024-12-10 03:16:21.296019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:27.015 [2024-12-10 03:16:21.296028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:28:27.015 [2024-12-10 03:16:21.296035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.015 [2024-12-10 03:16:21.296157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.015 [2024-12-10 03:16:21.296166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:27.015 [2024-12-10 03:16:21.296178] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:27.015 [2024-12-10 03:16:21.296186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.015 [2024-12-10 03:16:21.309461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.015 [2024-12-10 03:16:21.309492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:27.015 [2024-12-10 03:16:21.309502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.256 ms 00:28:27.015 [2024-12-10 03:16:21.309509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.015 [2024-12-10 03:16:21.322322] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:27.015 [2024-12-10 03:16:21.322354] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:27.015 [2024-12-10 03:16:21.322366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.015 [2024-12-10 03:16:21.322373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:27.015 [2024-12-10 03:16:21.322395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.770 ms 00:28:27.015 [2024-12-10 03:16:21.322402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.015 [2024-12-10 03:16:21.351436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.015 [2024-12-10 03:16:21.351578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:27.015 [2024-12-10 03:16:21.351594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.997 ms 00:28:27.015 [2024-12-10 03:16:21.351603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.015 [2024-12-10 03:16:21.363398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.015 [2024-12-10 03:16:21.363428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:27.015 [2024-12-10 03:16:21.363438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.751 ms 00:28:27.015 [2024-12-10 03:16:21.363445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.015 [2024-12-10 03:16:21.375085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.015 [2024-12-10 03:16:21.375204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:27.015 [2024-12-10 03:16:21.375220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.605 ms 00:28:27.015 [2024-12-10 03:16:21.375227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.015 [2024-12-10 03:16:21.375829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.015 [2024-12-10 03:16:21.375849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:27.015 [2024-12-10 03:16:21.375860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:28:27.015 [2024-12-10 03:16:21.375868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.277 [2024-12-10 03:16:21.432436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.277 [2024-12-10 03:16:21.432478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:27.277 [2024-12-10 03:16:21.432496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.552 ms 00:28:27.277 [2024-12-10 03:16:21.432504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.277 [2024-12-10 03:16:21.442909] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:27.277 [2024-12-10 03:16:21.445146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.277 [2024-12-10 03:16:21.445176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:27.277 [2024-12-10 03:16:21.445188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.603 ms 00:28:27.277 [2024-12-10 03:16:21.445197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.277 [2024-12-10 03:16:21.445280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.277 [2024-12-10 03:16:21.445292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:27.277 [2024-12-10 03:16:21.445304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:27.277 [2024-12-10 03:16:21.445311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.277 [2024-12-10 03:16:21.445932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.277 [2024-12-10 03:16:21.445963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:27.277 [2024-12-10 03:16:21.445973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:28:27.277 [2024-12-10 03:16:21.445981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.277 [2024-12-10 03:16:21.446003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.277 [2024-12-10 03:16:21.446012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:27.277 [2024-12-10 03:16:21.446019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:27.277 [2024-12-10 03:16:21.446027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.277 [2024-12-10 03:16:21.446061] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:27.277 [2024-12-10 03:16:21.446071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.277 [2024-12-10 03:16:21.446079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:27.277 [2024-12-10 03:16:21.446087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:27.277 [2024-12-10 03:16:21.446094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.277 [2024-12-10 03:16:21.469554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.277 [2024-12-10 03:16:21.469587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:27.277 [2024-12-10 03:16:21.469603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.444 ms 00:28:27.277 [2024-12-10 03:16:21.469611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.277 [2024-12-10 03:16:21.469681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.277 [2024-12-10 03:16:21.469690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:27.277 [2024-12-10 03:16:21.469698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:27.277 [2024-12-10 03:16:21.469705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
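Note on the trace above: each FTL management step is emitted as a four-line trace_step record (Action, name, duration, status). A minimal awk sketch for ranking the slowest steps in such a run — assuming one record field per console line, as in the raw Jenkins output, and a hypothetical saved log file named build.log (the helper below is illustrative, not part of the SPDK tree):

    # Pair each "name: <step>" line with the "duration: <n> ms" line
    # that follows it, then print the slowest FTL management steps.
    awk '
        /trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
        /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                  printf "%10.3f ms  %s\n", $0, name }
    ' build.log | sort -rn | head

Against this run it would surface 'Restore P2L checkpoints' (56.552 ms) and 'Initialize NV cache' (43.647 ms) as the dominant contributors to the 'FTL startup' total reported in the finish_msg record that follows.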
00:28:27.277 [2024-12-10 03:16:21.470637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 268.709 ms, result 0 00:28:28.666  [2024-12-10T03:16:24.001Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-10T03:16:24.945Z] Copying: 38/1024 [MB] (18 MBps) [2024-12-10T03:16:25.890Z] Copying: 55/1024 [MB] (17 MBps) [2024-12-10T03:16:26.834Z] Copying: 80/1024 [MB] (25 MBps) [2024-12-10T03:16:27.778Z] Copying: 107/1024 [MB] (26 MBps) [2024-12-10T03:16:28.722Z] Copying: 123/1024 [MB] (16 MBps) [2024-12-10T03:16:29.666Z] Copying: 144/1024 [MB] (21 MBps) [2024-12-10T03:16:31.054Z] Copying: 165/1024 [MB] (20 MBps) [2024-12-10T03:16:31.996Z] Copying: 183/1024 [MB] (18 MBps) [2024-12-10T03:16:32.940Z] Copying: 204/1024 [MB] (21 MBps) [2024-12-10T03:16:33.884Z] Copying: 219/1024 [MB] (15 MBps) [2024-12-10T03:16:34.825Z] Copying: 237/1024 [MB] (17 MBps) [2024-12-10T03:16:35.763Z] Copying: 258/1024 [MB] (20 MBps) [2024-12-10T03:16:36.703Z] Copying: 268/1024 [MB] (10 MBps) [2024-12-10T03:16:38.091Z] Copying: 279/1024 [MB] (10 MBps) [2024-12-10T03:16:38.664Z] Copying: 296/1024 [MB] (17 MBps) [2024-12-10T03:16:40.053Z] Copying: 308/1024 [MB] (11 MBps) [2024-12-10T03:16:40.998Z] Copying: 319/1024 [MB] (11 MBps) [2024-12-10T03:16:41.989Z] Copying: 330/1024 [MB] (10 MBps) [2024-12-10T03:16:42.986Z] Copying: 341/1024 [MB] (11 MBps) [2024-12-10T03:16:43.931Z] Copying: 352/1024 [MB] (11 MBps) [2024-12-10T03:16:44.874Z] Copying: 363/1024 [MB] (10 MBps) [2024-12-10T03:16:45.817Z] Copying: 374/1024 [MB] (10 MBps) [2024-12-10T03:16:46.759Z] Copying: 384/1024 [MB] (10 MBps) [2024-12-10T03:16:47.702Z] Copying: 395/1024 [MB] (10 MBps) [2024-12-10T03:16:49.091Z] Copying: 406/1024 [MB] (10 MBps) [2024-12-10T03:16:49.659Z] Copying: 416/1024 [MB] (10 MBps) [2024-12-10T03:16:51.048Z] Copying: 428/1024 [MB] (11 MBps) [2024-12-10T03:16:51.990Z] Copying: 445/1024 [MB] (16 MBps) [2024-12-10T03:16:52.934Z] Copying: 460/1024 [MB] (15 MBps) [2024-12-10T03:16:53.880Z] Copying: 474/1024 [MB] (13 MBps) [2024-12-10T03:16:54.822Z] Copying: 489/1024 [MB] (15 MBps) [2024-12-10T03:16:55.767Z] Copying: 508/1024 [MB] (18 MBps) [2024-12-10T03:16:56.776Z] Copying: 523/1024 [MB] (14 MBps) [2024-12-10T03:16:57.720Z] Copying: 535/1024 [MB] (12 MBps) [2024-12-10T03:16:58.662Z] Copying: 546/1024 [MB] (10 MBps) [2024-12-10T03:17:00.051Z] Copying: 557/1024 [MB] (10 MBps) [2024-12-10T03:17:00.995Z] Copying: 568/1024 [MB] (10 MBps) [2024-12-10T03:17:01.939Z] Copying: 578/1024 [MB] (10 MBps) [2024-12-10T03:17:02.882Z] Copying: 589/1024 [MB] (10 MBps) [2024-12-10T03:17:03.823Z] Copying: 600/1024 [MB] (11 MBps) [2024-12-10T03:17:04.769Z] Copying: 611/1024 [MB] (11 MBps) [2024-12-10T03:17:05.715Z] Copying: 622/1024 [MB] (10 MBps) [2024-12-10T03:17:06.658Z] Copying: 633/1024 [MB] (11 MBps) [2024-12-10T03:17:08.046Z] Copying: 655/1024 [MB] (22 MBps) [2024-12-10T03:17:08.991Z] Copying: 669/1024 [MB] (13 MBps) [2024-12-10T03:17:09.937Z] Copying: 686/1024 [MB] (17 MBps) [2024-12-10T03:17:10.882Z] Copying: 701/1024 [MB] (15 MBps) [2024-12-10T03:17:11.836Z] Copying: 717/1024 [MB] (15 MBps) [2024-12-10T03:17:12.780Z] Copying: 729/1024 [MB] (12 MBps) [2024-12-10T03:17:13.725Z] Copying: 743/1024 [MB] (13 MBps) [2024-12-10T03:17:14.667Z] Copying: 757/1024 [MB] (14 MBps) [2024-12-10T03:17:16.053Z] Copying: 772/1024 [MB] (14 MBps) [2024-12-10T03:17:16.999Z] Copying: 791/1024 [MB] (19 MBps) [2024-12-10T03:17:17.944Z] Copying: 806/1024 [MB] (15 MBps) [2024-12-10T03:17:18.888Z] Copying: 823/1024 [MB] (16 MBps) 
[2024-12-10T03:17:19.834Z] Copying: 836/1024 [MB] (12 MBps) [2024-12-10T03:17:20.778Z] Copying: 849/1024 [MB] (13 MBps) [2024-12-10T03:17:21.722Z] Copying: 861/1024 [MB] (11 MBps) [2024-12-10T03:17:22.666Z] Copying: 878/1024 [MB] (16 MBps) [2024-12-10T03:17:24.051Z] Copying: 893/1024 [MB] (15 MBps) [2024-12-10T03:17:25.065Z] Copying: 906/1024 [MB] (12 MBps) [2024-12-10T03:17:26.007Z] Copying: 917/1024 [MB] (11 MBps) [2024-12-10T03:17:26.952Z] Copying: 928/1024 [MB] (10 MBps) [2024-12-10T03:17:27.897Z] Copying: 939/1024 [MB] (10 MBps) [2024-12-10T03:17:28.842Z] Copying: 950/1024 [MB] (10 MBps) [2024-12-10T03:17:29.786Z] Copying: 960/1024 [MB] (10 MBps) [2024-12-10T03:17:30.727Z] Copying: 970/1024 [MB] (10 MBps) [2024-12-10T03:17:31.669Z] Copying: 981/1024 [MB] (10 MBps) [2024-12-10T03:17:33.052Z] Copying: 998/1024 [MB] (16 MBps) [2024-12-10T03:17:33.994Z] Copying: 1008/1024 [MB] (10 MBps) [2024-12-10T03:17:34.254Z] Copying: 1019/1024 [MB] (10 MBps) [2024-12-10T03:17:34.254Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-10 03:17:34.132073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.866 [2024-12-10 03:17:34.132161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:39.867 [2024-12-10 03:17:34.132181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:39.867 [2024-12-10 03:17:34.132192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.867 [2024-12-10 03:17:34.132220] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:39.867 [2024-12-10 03:17:34.137728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.867 [2024-12-10 03:17:34.137797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:39.867 [2024-12-10 03:17:34.137815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.485 ms 00:29:39.867 [2024-12-10 03:17:34.137828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.867 [2024-12-10 03:17:34.138195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.867 [2024-12-10 03:17:34.138211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:39.867 [2024-12-10 03:17:34.138225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:29:39.867 [2024-12-10 03:17:34.138238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.867 [2024-12-10 03:17:34.143856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.867 [2024-12-10 03:17:34.143891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:39.867 [2024-12-10 03:17:34.143905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.596 ms 00:29:39.867 [2024-12-10 03:17:34.143958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.867 [2024-12-10 03:17:34.150398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.867 [2024-12-10 03:17:34.150440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:39.867 [2024-12-10 03:17:34.150451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.414 ms 00:29:39.867 [2024-12-10 03:17:34.150460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.867 [2024-12-10 03:17:34.177947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.867 [2024-12-10 03:17:34.178164] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:39.867 [2024-12-10 03:17:34.178187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.414 ms 00:29:39.867 [2024-12-10 03:17:34.178195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.867 [2024-12-10 03:17:34.195486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.867 [2024-12-10 03:17:34.195536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:39.867 [2024-12-10 03:17:34.195549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.216 ms 00:29:39.867 [2024-12-10 03:17:34.195557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.867 [2024-12-10 03:17:34.200089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.867 [2024-12-10 03:17:34.200138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:39.867 [2024-12-10 03:17:34.200150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.465 ms 00:29:39.867 [2024-12-10 03:17:34.200158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.867 [2024-12-10 03:17:34.227201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.867 [2024-12-10 03:17:34.227250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:39.867 [2024-12-10 03:17:34.227263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.026 ms 00:29:39.867 [2024-12-10 03:17:34.227271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.129 [2024-12-10 03:17:34.254050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.129 [2024-12-10 03:17:34.254096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:40.129 [2024-12-10 03:17:34.254107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.726 ms 00:29:40.129 [2024-12-10 03:17:34.254115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.129 [2024-12-10 03:17:34.280202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.129 [2024-12-10 03:17:34.280251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:40.129 [2024-12-10 03:17:34.280262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.036 ms 00:29:40.129 [2024-12-10 03:17:34.280270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.129 [2024-12-10 03:17:34.306408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.129 [2024-12-10 03:17:34.306455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:40.129 [2024-12-10 03:17:34.306467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.042 ms 00:29:40.129 [2024-12-10 03:17:34.306476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.129 [2024-12-10 03:17:34.306527] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:40.129 [2024-12-10 03:17:34.306551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:40.129 [2024-12-10 03:17:34.306566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:40.129 [2024-12-10 03:17:34.306574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 
/ 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306963] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.306994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.307002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.307009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.307017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.307024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.307031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.307038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.307047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.307056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.307064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.307072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:40.129 [2024-12-10 03:17:34.307080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 
03:17:34.307157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:40.130 [2024-12-10 03:17:34.307343] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:40.130 [2024-12-10 03:17:34.307351] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 201fe23d-9e7b-4c75-b1d6-41bdd989f7c9 00:29:40.130 [2024-12-10 
03:17:34.307360] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:40.130 [2024-12-10 03:17:34.307367] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:40.130 [2024-12-10 03:17:34.307398] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:40.130 [2024-12-10 03:17:34.307408] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:40.130 [2024-12-10 03:17:34.307423] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:40.130 [2024-12-10 03:17:34.307431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:40.130 [2024-12-10 03:17:34.307439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:40.130 [2024-12-10 03:17:34.307446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:40.130 [2024-12-10 03:17:34.307453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:40.130 [2024-12-10 03:17:34.307461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.130 [2024-12-10 03:17:34.307471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:40.130 [2024-12-10 03:17:34.307480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:29:40.130 [2024-12-10 03:17:34.307491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.130 [2024-12-10 03:17:34.321063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.130 [2024-12-10 03:17:34.321108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:40.130 [2024-12-10 03:17:34.321120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.536 ms 00:29:40.130 [2024-12-10 03:17:34.321128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.130 [2024-12-10 03:17:34.321552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.130 [2024-12-10 03:17:34.321576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:40.130 [2024-12-10 03:17:34.321586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:29:40.130 [2024-12-10 03:17:34.321593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.130 [2024-12-10 03:17:34.358234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.130 [2024-12-10 03:17:34.358285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:40.130 [2024-12-10 03:17:34.358298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.130 [2024-12-10 03:17:34.358306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.130 [2024-12-10 03:17:34.358399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.130 [2024-12-10 03:17:34.358416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:40.130 [2024-12-10 03:17:34.358425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.130 [2024-12-10 03:17:34.358433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.130 [2024-12-10 03:17:34.358525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.130 [2024-12-10 03:17:34.358537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:40.130 [2024-12-10 03:17:34.358545] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.130 [2024-12-10 03:17:34.358556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.130 [2024-12-10 03:17:34.358577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.130 [2024-12-10 03:17:34.358590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:40.130 [2024-12-10 03:17:34.358606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.130 [2024-12-10 03:17:34.358617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.130 [2024-12-10 03:17:34.444467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.130 [2024-12-10 03:17:34.444689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:40.130 [2024-12-10 03:17:34.444713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.130 [2024-12-10 03:17:34.444722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.391 [2024-12-10 03:17:34.515260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.391 [2024-12-10 03:17:34.515322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:40.391 [2024-12-10 03:17:34.515335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.391 [2024-12-10 03:17:34.515343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.391 [2024-12-10 03:17:34.515424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.391 [2024-12-10 03:17:34.515436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:40.391 [2024-12-10 03:17:34.515445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.391 [2024-12-10 03:17:34.515453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.391 [2024-12-10 03:17:34.515510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.391 [2024-12-10 03:17:34.515546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:40.391 [2024-12-10 03:17:34.515554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.391 [2024-12-10 03:17:34.515571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.391 [2024-12-10 03:17:34.515703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.391 [2024-12-10 03:17:34.515719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:40.391 [2024-12-10 03:17:34.515733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.391 [2024-12-10 03:17:34.515745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.391 [2024-12-10 03:17:34.515796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.391 [2024-12-10 03:17:34.515812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:40.391 [2024-12-10 03:17:34.515834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.391 [2024-12-10 03:17:34.515848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.391 [2024-12-10 03:17:34.515914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.391 [2024-12-10 03:17:34.515950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:29:40.391 [2024-12-10 03:17:34.515960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.391 [2024-12-10 03:17:34.515968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.391 [2024-12-10 03:17:34.516015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.391 [2024-12-10 03:17:34.516026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:40.391 [2024-12-10 03:17:34.516034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.391 [2024-12-10 03:17:34.516047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.391 [2024-12-10 03:17:34.516182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.079 ms, result 0 00:29:40.959 00:29:40.959 00:29:40.959 03:17:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:43.495 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:43.495 03:17:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:43.495 03:17:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:43.495 03:17:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:43.495 03:17:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:43.495 03:17:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:43.495 03:17:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:43.495 03:17:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:43.495 Process with pid 80469 is not found 00:29:43.495 03:17:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80469 00:29:43.495 03:17:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80469 ']' 00:29:43.495 03:17:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80469 00:29:43.495 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80469) - No such process 00:29:43.495 03:17:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80469 is not found' 00:29:43.495 03:17:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:43.755 Remove shared memory files 00:29:43.755 03:17:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:43.755 03:17:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:43.755 03:17:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:43.755 03:17:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:43.755 03:17:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:43.755 03:17:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:43.755 03:17:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:43.755 ************************************ 00:29:43.755 END TEST ftl_dirty_shutdown 00:29:43.755 ************************************ 00:29:43.755 00:29:43.755 real 4m5.582s 00:29:43.755 user 4m25.084s 00:29:43.755 sys 0m23.189s 00:29:43.755 03:17:38 
ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:43.755 03:17:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:43.755 03:17:38 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:43.755 03:17:38 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:43.755 03:17:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:43.755 03:17:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:43.755 ************************************ 00:29:43.755 START TEST ftl_upgrade_shutdown 00:29:43.755 ************************************ 00:29:43.755 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:44.015 * Looking for test storage... 00:29:44.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:44.015 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:44.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.016 --rc genhtml_branch_coverage=1 00:29:44.016 --rc genhtml_function_coverage=1 00:29:44.016 --rc genhtml_legend=1 00:29:44.016 --rc geninfo_all_blocks=1 00:29:44.016 --rc geninfo_unexecuted_blocks=1 00:29:44.016 00:29:44.016 ' 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:44.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.016 --rc genhtml_branch_coverage=1 00:29:44.016 --rc genhtml_function_coverage=1 00:29:44.016 --rc genhtml_legend=1 00:29:44.016 --rc geninfo_all_blocks=1 00:29:44.016 --rc geninfo_unexecuted_blocks=1 00:29:44.016 00:29:44.016 ' 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:44.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.016 --rc genhtml_branch_coverage=1 00:29:44.016 --rc genhtml_function_coverage=1 00:29:44.016 --rc genhtml_legend=1 00:29:44.016 --rc geninfo_all_blocks=1 00:29:44.016 --rc geninfo_unexecuted_blocks=1 00:29:44.016 00:29:44.016 ' 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:44.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.016 --rc genhtml_branch_coverage=1 00:29:44.016 --rc genhtml_function_coverage=1 00:29:44.016 --rc genhtml_legend=1 00:29:44.016 --rc geninfo_all_blocks=1 00:29:44.016 --rc geninfo_unexecuted_blocks=1 00:29:44.016 00:29:44.016 ' 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:44.016 03:17:38 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83106 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83106 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83106 ']' 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:44.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.016 03:17:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:44.016 [2024-12-10 03:17:38.356663] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
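The xtrace earlier in this test (the scripts/common.sh@333-368 lines) steps through the dotted-version comparison that decides which lcov options to use. A condensed bash sketch of that logic, reconstructed from the trace rather than copied verbatim from the tree (the real helper also routes each field through a decimal sanitizer, elided here):

    # lt A B -> exit 0 when dotted version A sorts before B,
    # mirroring the ver1/ver2 split-and-compare seen in the xtrace.
    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # A is newer
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # A is older
        done
        return 1                                            # equal
    }
    # As in the trace: lcov 1.15 compares below 2, so the pre-2.0
    # branch/function coverage flags are selected.
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi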
00:29:44.016 [2024-12-10 03:17:38.357059] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83106 ] 00:29:44.276 [2024-12-10 03:17:38.522770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.276 [2024-12-10 03:17:38.641224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:45.219 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:45.220 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:45.220 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:45.529 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:45.529 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:45.529 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:45.529 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:45.529 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:45.529 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:45.529 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:45.529 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:45.529 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:45.529 { 00:29:45.529 "name": "basen1", 00:29:45.529 "aliases": [ 00:29:45.529 "0a56bf8e-41e5-4f4c-a257-ea2421d09caf" 00:29:45.529 ], 00:29:45.529 "product_name": "NVMe disk", 00:29:45.529 "block_size": 4096, 00:29:45.529 "num_blocks": 1310720, 00:29:45.529 "uuid": "0a56bf8e-41e5-4f4c-a257-ea2421d09caf", 00:29:45.529 "numa_id": -1, 00:29:45.529 "assigned_rate_limits": { 00:29:45.529 "rw_ios_per_sec": 0, 00:29:45.529 "rw_mbytes_per_sec": 0, 00:29:45.529 "r_mbytes_per_sec": 0, 00:29:45.529 "w_mbytes_per_sec": 0 00:29:45.529 }, 00:29:45.529 "claimed": true, 00:29:45.529 "claim_type": "read_many_write_one", 00:29:45.529 "zoned": false, 00:29:45.529 "supported_io_types": { 00:29:45.529 "read": true, 00:29:45.529 "write": true, 00:29:45.529 "unmap": true, 00:29:45.529 "flush": true, 00:29:45.529 "reset": true, 00:29:45.529 "nvme_admin": true, 00:29:45.529 "nvme_io": true, 00:29:45.529 "nvme_io_md": false, 00:29:45.529 "write_zeroes": true, 00:29:45.529 "zcopy": false, 00:29:45.529 "get_zone_info": false, 00:29:45.529 "zone_management": false, 00:29:45.529 "zone_append": false, 00:29:45.529 "compare": true, 00:29:45.529 "compare_and_write": false, 00:29:45.529 "abort": true, 00:29:45.529 "seek_hole": false, 00:29:45.529 "seek_data": false, 00:29:45.529 "copy": true, 00:29:45.529 "nvme_iov_md": false 00:29:45.529 }, 00:29:45.529 "driver_specific": { 00:29:45.529 "nvme": [ 00:29:45.529 { 00:29:45.529 "pci_address": "0000:00:11.0", 00:29:45.529 "trid": { 00:29:45.529 "trtype": "PCIe", 00:29:45.529 "traddr": "0000:00:11.0" 00:29:45.529 }, 00:29:45.529 "ctrlr_data": { 00:29:45.529 "cntlid": 0, 00:29:45.530 "vendor_id": "0x1b36", 00:29:45.530 "model_number": "QEMU NVMe Ctrl", 00:29:45.530 "serial_number": "12341", 00:29:45.530 "firmware_revision": "8.0.0", 00:29:45.530 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:45.530 "oacs": { 00:29:45.530 "security": 0, 00:29:45.530 "format": 1, 00:29:45.530 "firmware": 0, 00:29:45.530 "ns_manage": 1 00:29:45.530 }, 00:29:45.530 "multi_ctrlr": false, 00:29:45.530 "ana_reporting": false 00:29:45.530 }, 00:29:45.530 "vs": { 00:29:45.530 "nvme_version": "1.4" 00:29:45.530 }, 00:29:45.530 "ns_data": { 00:29:45.530 "id": 1, 00:29:45.530 "can_share": false 00:29:45.530 } 00:29:45.530 } 00:29:45.530 ], 00:29:45.530 "mp_policy": "active_passive" 00:29:45.530 } 00:29:45.530 } 00:29:45.530 ]' 00:29:45.530 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:45.530 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:45.530 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:45.799 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:45.799 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:45.799 03:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:45.799 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:45.799 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:45.799 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:45.799 03:17:39 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:45.799 03:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:45.799 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=41297c5e-30fe-48b6-9311-ebbd4c602642 00:29:45.799 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:45.799 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 41297c5e-30fe-48b6-9311-ebbd4c602642 00:29:46.059 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:46.319 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=e86f0806-f682-4445-85fe-b26e3850e5ae 00:29:46.319 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u e86f0806-f682-4445-85fe-b26e3850e5ae 00:29:46.579 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=0cc31754-d513-4482-8c0f-e4e35452ccc8 00:29:46.579 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 0cc31754-d513-4482-8c0f-e4e35452ccc8 ]] 00:29:46.579 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 0cc31754-d513-4482-8c0f-e4e35452ccc8 5120 00:29:46.579 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:46.579 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:46.579 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=0cc31754-d513-4482-8c0f-e4e35452ccc8 00:29:46.579 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:46.579 03:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 0cc31754-d513-4482-8c0f-e4e35452ccc8 00:29:46.579 03:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=0cc31754-d513-4482-8c0f-e4e35452ccc8 00:29:46.579 03:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:46.580 03:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:46.580 03:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:46.580 03:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0cc31754-d513-4482-8c0f-e4e35452ccc8 00:29:46.840 03:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:46.840 { 00:29:46.840 "name": "0cc31754-d513-4482-8c0f-e4e35452ccc8", 00:29:46.840 "aliases": [ 00:29:46.840 "lvs/basen1p0" 00:29:46.840 ], 00:29:46.840 "product_name": "Logical Volume", 00:29:46.840 "block_size": 4096, 00:29:46.840 "num_blocks": 5242880, 00:29:46.840 "uuid": "0cc31754-d513-4482-8c0f-e4e35452ccc8", 00:29:46.840 "assigned_rate_limits": { 00:29:46.840 "rw_ios_per_sec": 0, 00:29:46.840 "rw_mbytes_per_sec": 0, 00:29:46.840 "r_mbytes_per_sec": 0, 00:29:46.840 "w_mbytes_per_sec": 0 00:29:46.840 }, 00:29:46.840 "claimed": false, 00:29:46.840 "zoned": false, 00:29:46.840 "supported_io_types": { 00:29:46.840 "read": true, 00:29:46.840 "write": true, 00:29:46.840 "unmap": true, 00:29:46.840 "flush": false, 00:29:46.840 "reset": true, 00:29:46.840 "nvme_admin": false, 00:29:46.840 "nvme_io": false, 00:29:46.840 "nvme_io_md": false, 00:29:46.840 "write_zeroes": 
true, 00:29:46.840 "zcopy": false, 00:29:46.840 "get_zone_info": false, 00:29:46.840 "zone_management": false, 00:29:46.840 "zone_append": false, 00:29:46.840 "compare": false, 00:29:46.840 "compare_and_write": false, 00:29:46.840 "abort": false, 00:29:46.840 "seek_hole": true, 00:29:46.840 "seek_data": true, 00:29:46.840 "copy": false, 00:29:46.840 "nvme_iov_md": false 00:29:46.840 }, 00:29:46.840 "driver_specific": { 00:29:46.840 "lvol": { 00:29:46.840 "lvol_store_uuid": "e86f0806-f682-4445-85fe-b26e3850e5ae", 00:29:46.840 "base_bdev": "basen1", 00:29:46.840 "thin_provision": true, 00:29:46.840 "num_allocated_clusters": 0, 00:29:46.840 "snapshot": false, 00:29:46.840 "clone": false, 00:29:46.840 "esnap_clone": false 00:29:46.840 } 00:29:46.840 } 00:29:46.840 } 00:29:46.840 ]' 00:29:46.840 03:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:46.840 03:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:46.840 03:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:46.840 03:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:46.840 03:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:46.840 03:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:46.840 03:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:46.840 03:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:46.840 03:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:47.100 03:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:47.100 03:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:47.100 03:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:47.360 03:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:47.360 03:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:47.360 03:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 0cc31754-d513-4482-8c0f-e4e35452ccc8 -c cachen1p0 --l2p_dram_limit 2 00:29:47.621 [2024-12-10 03:17:41.799063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.621 [2024-12-10 03:17:41.799103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:47.621 [2024-12-10 03:17:41.799116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:47.621 [2024-12-10 03:17:41.799122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.621 [2024-12-10 03:17:41.799167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.621 [2024-12-10 03:17:41.799175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:47.621 [2024-12-10 03:17:41.799183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:47.621 [2024-12-10 03:17:41.799189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.621 [2024-12-10 03:17:41.799205] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:47.621 [2024-12-10 
03:17:41.799747] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:47.621 [2024-12-10 03:17:41.799768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.621 [2024-12-10 03:17:41.799774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:47.621 [2024-12-10 03:17:41.799782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.565 ms 00:29:47.621 [2024-12-10 03:17:41.799788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.621 [2024-12-10 03:17:41.799837] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f683d4f5-edcb-4c8c-955d-dcedeabc8e49 00:29:47.621 [2024-12-10 03:17:41.800792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.621 [2024-12-10 03:17:41.800811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:47.621 [2024-12-10 03:17:41.800819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:47.621 [2024-12-10 03:17:41.800826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.621 [2024-12-10 03:17:41.805521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.621 [2024-12-10 03:17:41.805626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:47.621 [2024-12-10 03:17:41.805672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.664 ms 00:29:47.621 [2024-12-10 03:17:41.805693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.621 [2024-12-10 03:17:41.805734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.621 [2024-12-10 03:17:41.805753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:47.621 [2024-12-10 03:17:41.805802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:47.621 [2024-12-10 03:17:41.805823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.621 [2024-12-10 03:17:41.805869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.621 [2024-12-10 03:17:41.805890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:47.621 [2024-12-10 03:17:41.805907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:47.621 [2024-12-10 03:17:41.805952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.621 [2024-12-10 03:17:41.806006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:47.621 [2024-12-10 03:17:41.808890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.621 [2024-12-10 03:17:41.808978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:47.621 [2024-12-10 03:17:41.809032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.887 ms 00:29:47.621 [2024-12-10 03:17:41.809051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.621 [2024-12-10 03:17:41.809084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.621 [2024-12-10 03:17:41.809100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:47.621 [2024-12-10 03:17:41.809148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:47.621 [2024-12-10 03:17:41.809166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:47.621 [2024-12-10 03:17:41.809189] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:47.621 [2024-12-10 03:17:41.809307] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:47.621 [2024-12-10 03:17:41.809410] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:47.621 [2024-12-10 03:17:41.809477] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:47.621 [2024-12-10 03:17:41.809506] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:47.621 [2024-12-10 03:17:41.809530] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:47.621 [2024-12-10 03:17:41.809555] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:47.621 [2024-12-10 03:17:41.809569] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:47.621 [2024-12-10 03:17:41.809588] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:47.621 [2024-12-10 03:17:41.809633] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:47.621 [2024-12-10 03:17:41.809652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.621 [2024-12-10 03:17:41.809668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:47.621 [2024-12-10 03:17:41.809684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.464 ms 00:29:47.621 [2024-12-10 03:17:41.809699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.621 [2024-12-10 03:17:41.809775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.621 [2024-12-10 03:17:41.809832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:47.621 [2024-12-10 03:17:41.809848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:47.621 [2024-12-10 03:17:41.809862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.621 [2024-12-10 03:17:41.809966] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:47.621 [2024-12-10 03:17:41.810017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:47.622 [2024-12-10 03:17:41.810036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:47.622 [2024-12-10 03:17:41.810051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.622 [2024-12-10 03:17:41.810067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:47.622 [2024-12-10 03:17:41.810082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:47.622 [2024-12-10 03:17:41.810117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:47.622 [2024-12-10 03:17:41.810134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:47.622 [2024-12-10 03:17:41.810150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:47.622 [2024-12-10 03:17:41.810164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.622 [2024-12-10 03:17:41.810180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:47.622 [2024-12-10 03:17:41.810194] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:47.622 [2024-12-10 03:17:41.810209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.622 [2024-12-10 03:17:41.810247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:47.622 [2024-12-10 03:17:41.810266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:47.622 [2024-12-10 03:17:41.810280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.622 [2024-12-10 03:17:41.810296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:47.622 [2024-12-10 03:17:41.810310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:47.622 [2024-12-10 03:17:41.810326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.622 [2024-12-10 03:17:41.810341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:47.622 [2024-12-10 03:17:41.810356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:47.622 [2024-12-10 03:17:41.810401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:47.622 [2024-12-10 03:17:41.810420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:47.622 [2024-12-10 03:17:41.810434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:47.622 [2024-12-10 03:17:41.810449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:47.622 [2024-12-10 03:17:41.810463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:47.622 [2024-12-10 03:17:41.810479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:47.622 [2024-12-10 03:17:41.810493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:47.622 [2024-12-10 03:17:41.810509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:47.622 [2024-12-10 03:17:41.810523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:47.622 [2024-12-10 03:17:41.810563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:47.622 [2024-12-10 03:17:41.810579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:47.622 [2024-12-10 03:17:41.810616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:47.622 [2024-12-10 03:17:41.810632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.622 [2024-12-10 03:17:41.810689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:47.622 [2024-12-10 03:17:41.810705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:47.622 [2024-12-10 03:17:41.810722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.622 [2024-12-10 03:17:41.810736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:47.622 [2024-12-10 03:17:41.810775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:47.622 [2024-12-10 03:17:41.810791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.622 [2024-12-10 03:17:41.810806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:47.622 [2024-12-10 03:17:41.810820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:47.622 [2024-12-10 03:17:41.810851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.622 [2024-12-10 03:17:41.810867] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:47.622 [2024-12-10 03:17:41.810883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:47.622 [2024-12-10 03:17:41.810898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:47.622 [2024-12-10 03:17:41.811003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.622 [2024-12-10 03:17:41.811020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:47.622 [2024-12-10 03:17:41.811037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:47.622 [2024-12-10 03:17:41.811051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:47.622 [2024-12-10 03:17:41.811067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:47.622 [2024-12-10 03:17:41.811081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:47.622 [2024-12-10 03:17:41.811097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:47.622 [2024-12-10 03:17:41.811217] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:47.622 [2024-12-10 03:17:41.811230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:47.622 [2024-12-10 03:17:41.811237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:47.622 [2024-12-10 03:17:41.811244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:47.622 [2024-12-10 03:17:41.811249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:47.622 [2024-12-10 03:17:41.811256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:47.622 [2024-12-10 03:17:41.811261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:47.622 [2024-12-10 03:17:41.811268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:47.622 [2024-12-10 03:17:41.811274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:47.622 [2024-12-10 03:17:41.811282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:47.622 [2024-12-10 03:17:41.811287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:47.622 [2024-12-10 03:17:41.811295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:47.622 [2024-12-10 03:17:41.811301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:47.622 [2024-12-10 03:17:41.811307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:47.622 [2024-12-10 03:17:41.811312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:47.622 [2024-12-10 03:17:41.811319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:47.622 [2024-12-10 03:17:41.811325] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:47.622 [2024-12-10 03:17:41.811332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:47.622 [2024-12-10 03:17:41.811338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:47.622 [2024-12-10 03:17:41.811345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:47.622 [2024-12-10 03:17:41.811350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:47.622 [2024-12-10 03:17:41.811357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:47.622 [2024-12-10 03:17:41.811364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.622 [2024-12-10 03:17:41.811371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:47.622 [2024-12-10 03:17:41.811386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.452 ms 00:29:47.622 [2024-12-10 03:17:41.811393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.622 [2024-12-10 03:17:41.811425] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
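In the superblock metadata dump above, each region is listed as type/version/block offset/block size in hex, where a block is the base device's 4096-byte block. A quick shell-arithmetic sanity check on the base data region (type 0x9, blk_sz 0x480000):

  # 0x480000 blocks of 4096 B each, expressed in MiB
  echo $(( 0x480000 * 4096 / 1024 / 1024 ))   # -> 18432, matching data_btm's 18432.00 MiB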
00:29:47.622 [2024-12-10 03:17:41.811435] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:50.913 [2024-12-10 03:17:45.110314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.110496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:50.913 [2024-12-10 03:17:45.110633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3298.875 ms 00:29:50.913 [2024-12-10 03:17:45.110662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.913 [2024-12-10 03:17:45.135777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.135919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:50.913 [2024-12-10 03:17:45.136120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.903 ms 00:29:50.913 [2024-12-10 03:17:45.136278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.913 [2024-12-10 03:17:45.136363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.136504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:50.913 [2024-12-10 03:17:45.136533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:50.913 [2024-12-10 03:17:45.136561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.913 [2024-12-10 03:17:45.166854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.166987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:50.913 [2024-12-10 03:17:45.167052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.168 ms 00:29:50.913 [2024-12-10 03:17:45.167078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.913 [2024-12-10 03:17:45.167116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.167145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:50.913 [2024-12-10 03:17:45.167165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:50.913 [2024-12-10 03:17:45.167186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.913 [2024-12-10 03:17:45.167541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.167635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:50.913 [2024-12-10 03:17:45.167694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:29:50.913 [2024-12-10 03:17:45.167719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.913 [2024-12-10 03:17:45.167769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.167885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:50.913 [2024-12-10 03:17:45.167911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:50.913 [2024-12-10 03:17:45.167956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.913 [2024-12-10 03:17:45.181808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.181915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:50.913 [2024-12-10 03:17:45.181963] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.821 ms 00:29:50.913 [2024-12-10 03:17:45.181987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.913 [2024-12-10 03:17:45.206737] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:50.913 [2024-12-10 03:17:45.207674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.207768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:50.913 [2024-12-10 03:17:45.207821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.608 ms 00:29:50.913 [2024-12-10 03:17:45.207844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.913 [2024-12-10 03:17:45.233724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.233841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:50.913 [2024-12-10 03:17:45.233894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.835 ms 00:29:50.913 [2024-12-10 03:17:45.233918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.913 [2024-12-10 03:17:45.233995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.234022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:50.913 [2024-12-10 03:17:45.234045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:50.913 [2024-12-10 03:17:45.234064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.913 [2024-12-10 03:17:45.257356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.257471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:50.913 [2024-12-10 03:17:45.257525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.236 ms 00:29:50.913 [2024-12-10 03:17:45.257549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.913 [2024-12-10 03:17:45.280475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.280581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:50.913 [2024-12-10 03:17:45.280599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.888 ms 00:29:50.913 [2024-12-10 03:17:45.280607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.913 [2024-12-10 03:17:45.281136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.913 [2024-12-10 03:17:45.281150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:50.913 [2024-12-10 03:17:45.281161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.507 ms 00:29:50.913 [2024-12-10 03:17:45.281170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.170 [2024-12-10 03:17:45.354440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.170 [2024-12-10 03:17:45.354580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:51.170 [2024-12-10 03:17:45.354606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.234 ms 00:29:51.170 [2024-12-10 03:17:45.354615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.170 [2024-12-10 03:17:45.379333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
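The scrub that ran during startup (above, ~3.3 s) covers the whole NV cache: the layout reported a 5120 MiB cache device and a chunk count of 5, so each chunk spans roughly 1 GiB (ignoring the cache's own metadata regions). That granularity matters later, when each 1 GiB fill pass closes about one chunk:

  # Rough NV cache chunk size, from the values in the layout dump
  echo $(( 5120 / 5 ))   # -> ~1024 MiB per chunk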
00:29:51.170 [2024-12-10 03:17:45.379372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:51.170 [2024-12-10 03:17:45.379398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.647 ms 00:29:51.170 [2024-12-10 03:17:45.379406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.170 [2024-12-10 03:17:45.403217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.170 [2024-12-10 03:17:45.403253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:51.170 [2024-12-10 03:17:45.403265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.773 ms 00:29:51.170 [2024-12-10 03:17:45.403272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.170 [2024-12-10 03:17:45.426745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.170 [2024-12-10 03:17:45.426777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:51.170 [2024-12-10 03:17:45.426790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.437 ms 00:29:51.170 [2024-12-10 03:17:45.426797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.170 [2024-12-10 03:17:45.426835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.170 [2024-12-10 03:17:45.426844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:51.170 [2024-12-10 03:17:45.426856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:51.170 [2024-12-10 03:17:45.426864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.170 [2024-12-10 03:17:45.426937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.170 [2024-12-10 03:17:45.426948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:51.170 [2024-12-10 03:17:45.426957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:51.170 [2024-12-10 03:17:45.426965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.170 [2024-12-10 03:17:45.427784] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3628.294 ms, result 0 00:29:51.170 { 00:29:51.170 "name": "ftl", 00:29:51.170 "uuid": "f683d4f5-edcb-4c8c-955d-dcedeabc8e49" 00:29:51.170 } 00:29:51.170 03:17:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:51.427 [2024-12-10 03:17:45.599193] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.427 03:17:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:51.685 03:17:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:51.685 [2024-12-10 03:17:45.991576] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:51.685 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:51.942 [2024-12-10 03:17:46.151886] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:51.942 03:17:46 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:52.200 Fill FTL, iteration 1 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83228 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83228 /var/tmp/spdk.tgt.sock 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83228 ']' 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:52.200 03:17:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.201 03:17:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:52.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:52.201 03:17:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.201 03:17:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:52.201 [2024-12-10 03:17:46.578762] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
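The fill phase writes the exported FTL bdev over NVMe/TCP in two passes of 1024 one-MiB blocks (1 GiB each) at queue depth 2, advancing seek by 1024 blocks between iterations; tcp_dd is the test helper that drives spdk_dd against the initiator's RPC socket. A sketch of the loop upgrade_shutdown.sh is running here, with the MD5 readback between passes elided:

  seek=0
  for ((i = 0; i < 2; i++)); do
    # one 1 GiB pass of random data into ftln1, two IOs in flight
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
    seek=$((seek + 1024))
  done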
00:29:52.201 [2024-12-10 03:17:46.579045] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83228 ] 00:29:52.459 [2024-12-10 03:17:46.738176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.459 [2024-12-10 03:17:46.833025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.024 03:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.024 03:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:53.024 03:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:53.590 ftln1 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83228 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83228 ']' 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83228 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83228 00:29:53.590 killing process with pid 83228 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83228' 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83228 00:29:53.590 03:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83228 00:29:54.965 03:17:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:54.965 03:17:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:54.965 [2024-12-10 03:17:49.322270] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
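Note the initiator pattern used here: a short-lived spdk_tgt on core 1 attaches the TCP-exported namespace (yielding bdev ftln1), its bdev subsystem configuration is captured as JSON, and the process is killed; every subsequent one-shot spdk_dd recreates ftln1 from that JSON via --json. A sketch of how the config file is assembled, mirroring the echo/save_subsystem_config calls above (the redirect into ini.json is implied by the --json flag on the spdk_dd runs):

  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
      save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json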
00:29:54.965 [2024-12-10 03:17:49.322405] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83270 ] 00:29:55.223 [2024-12-10 03:17:49.477566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.223 [2024-12-10 03:17:49.551843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.595  [2024-12-10T03:17:51.916Z] Copying: 261/1024 [MB] (261 MBps) [2024-12-10T03:17:53.298Z] Copying: 507/1024 [MB] (246 MBps) [2024-12-10T03:17:53.863Z] Copying: 736/1024 [MB] (229 MBps) [2024-12-10T03:17:54.121Z] Copying: 996/1024 [MB] (260 MBps) [2024-12-10T03:17:54.690Z] Copying: 1024/1024 [MB] (average 248 MBps) 00:30:00.302 00:30:00.302 03:17:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:00.302 Calculate MD5 checksum, iteration 1 00:30:00.302 03:17:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:00.302 03:17:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:00.302 03:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:00.302 03:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:00.302 03:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:00.302 03:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:00.303 03:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:00.303 [2024-12-10 03:17:54.621320] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
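Each fill is verified by reading the same 1 GiB back out of ftln1 into a plain file and hashing it; the per-iteration digest is stored in sums[] so it can be compared against the data after the shutdown/upgrade cycle. The checksum step, using the paths from this run:

  # Read iteration 1's data back from ftln1 and record its MD5
  tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')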
00:30:00.303 [2024-12-10 03:17:54.621569] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83330 ] 00:30:00.564 [2024-12-10 03:17:54.777617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.564 [2024-12-10 03:17:54.854373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.954  [2024-12-10T03:17:56.908Z] Copying: 655/1024 [MB] (655 MBps) [2024-12-10T03:17:57.474Z] Copying: 1024/1024 [MB] (average 648 MBps) 00:30:03.086 00:30:03.086 03:17:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:03.086 03:17:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:04.987 03:17:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:04.987 03:17:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a876db715c4e08991cfa7db56c6c39d3 00:30:04.987 03:17:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:04.987 03:17:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:04.987 Fill FTL, iteration 2 00:30:04.987 03:17:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:04.987 03:17:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:04.987 03:17:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:04.987 03:17:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:04.987 03:17:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:04.987 03:17:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:04.987 03:17:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:04.987 [2024-12-10 03:17:59.253739] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
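seek and skip are counted in bs-sized units, so seek=1024 with bs=1048576 places iteration 2 immediately after iteration 1's data, and skip=1024 reads that same range back:

  # Byte offset of iteration 2 (seek is in bs units, bs = 1 MiB)
  echo $(( 1024 * 1048576 ))   # -> 1073741824, i.e. the 1 GiB mark in ftln1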
00:30:04.987 [2024-12-10 03:17:59.253851] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83379 ] 00:30:05.247 [2024-12-10 03:17:59.414619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.247 [2024-12-10 03:17:59.507272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.620  [2024-12-10T03:18:01.946Z] Copying: 176/1024 [MB] (176 MBps) [2024-12-10T03:18:02.883Z] Copying: 425/1024 [MB] (249 MBps) [2024-12-10T03:18:04.261Z] Copying: 678/1024 [MB] (253 MBps) [2024-12-10T03:18:04.261Z] Copying: 927/1024 [MB] (249 MBps) [2024-12-10T03:18:04.828Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:30:10.440 00:30:10.699 Calculate MD5 checksum, iteration 2 00:30:10.699 03:18:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:10.699 03:18:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:10.699 03:18:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:10.699 03:18:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:10.699 03:18:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:10.699 03:18:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:10.699 03:18:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:10.699 03:18:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:10.699 [2024-12-10 03:18:04.905293] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:30:10.699 [2024-12-10 03:18:04.905582] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83442 ]
00:30:10.699 [2024-12-10 03:18:05.060964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:10.959 [2024-12-10 03:18:05.136000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:12.333  [2024-12-10T03:18:07.288Z] Copying: 653/1024 [MB] (653 MBps) [2024-12-10T03:18:08.221Z] Copying: 1024/1024 [MB] (average 648 MBps)
00:30:13.833
00:30:13.833 03:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048
00:30:13.833 03:18:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:30:15.766 03:18:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d '
00:30:15.766 03:18:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8e6c79228dcdfef5cee7514c3b2be6f4
00:30:15.766 03:18:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ ))
00:30:15.766 03:18:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:30:15.766 03:18:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:30:15.766 [2024-12-10 03:18:10.096751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.766 [2024-12-10 03:18:10.096796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:30:15.766 [2024-12-10 03:18:10.096808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms
00:30:15.766 [2024-12-10 03:18:10.096814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.766 [2024-12-10 03:18:10.096833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.766 [2024-12-10 03:18:10.096843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:30:15.766 [2024-12-10 03:18:10.096849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:30:15.766 [2024-12-10 03:18:10.096855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.766 [2024-12-10 03:18:10.096870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.766 [2024-12-10 03:18:10.096876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:30:15.766 [2024-12-10 03:18:10.096882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:30:15.766 [2024-12-10 03:18:10.096888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.766 [2024-12-10 03:18:10.096937] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.177 ms, result 0
00:30:15.766 true
00:30:15.766 03:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:16.028 {
00:30:16.028   "name": "ftl",
00:30:16.028   "properties": [
00:30:16.028     { "name": "superblock_version", "value": 5, "read-only": true },
00:30:16.028     {
00:30:16.028       "name": "base_device",
00:30:16.028       "bands": [
00:30:16.028         { "id": 0, "state": "FREE", "validity": 0.0 },
00:30:16.028         { "id": 1, "state": "FREE", "validity": 0.0 },
00:30:16.028         { "id": 2, "state": "FREE", "validity": 0.0 },
00:30:16.028         { "id": 3, "state": "FREE", "validity": 0.0 },
00:30:16.028         { "id": 4, "state": "FREE", "validity": 0.0 },
00:30:16.028         { "id": 5, "state": "FREE", "validity": 0.0 },
00:30:16.028         { "id": 6, "state": "FREE", "validity": 0.0 },
00:30:16.028         { "id": 7, "state": "FREE", "validity": 0.0 },
00:30:16.028         { "id": 8, "state": "FREE", "validity": 0.0 },
00:30:16.029         { "id": 9, "state": "FREE", "validity": 0.0 },
00:30:16.029         { "id": 10, "state": "FREE", "validity": 0.0 },
00:30:16.029         { "id": 11, "state": "FREE", "validity": 0.0 },
00:30:16.029         { "id": 12, "state": "FREE", "validity": 0.0 },
00:30:16.029         { "id": 13, "state": "FREE", "validity": 0.0 },
00:30:16.029         { "id": 14, "state": "FREE", "validity": 0.0 },
00:30:16.029         { "id": 15, "state": "FREE", "validity": 0.0 },
00:30:16.029         { "id": 16, "state": "FREE", "validity": 0.0 },
00:30:16.029         { "id": 17, "state": "FREE", "validity": 0.0 }
00:30:16.029       ],
00:30:16.029       "read-only": true
00:30:16.029     },
00:30:16.029     {
00:30:16.029       "name": "cache_device",
00:30:16.029       "type": "bdev",
00:30:16.029       "chunks": [
00:30:16.029         { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
00:30:16.029         { "id": 1, "state": "CLOSED", "utilization": 1.0 },
00:30:16.029         { "id": 2, "state": "CLOSED", "utilization": 1.0 },
00:30:16.029         { "id": 3, "state": "OPEN", "utilization": 0.001953125 },
00:30:16.029         { "id": 4, "state": "OPEN", "utilization": 0.0 }
00:30:16.029       ],
00:30:16.029       "read-only": true
00:30:16.029     },
00:30:16.029     { "name": "verbose_mode", "value": true, "unit": "", "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" },
00:30:16.029     { "name": "prep_upgrade_on_shutdown", "value": false, "unit": "", "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" }
00:30:16.029   ]
00:30:16.029 }
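The dump above is plain JSON on stdout, which is what lets the later steps post-process it with jq. A minimal standalone equivalent of this query (the bdev name ftl and the repo path come from this run; rpc.py talks to the default /var/tmp/spdk.sock socket):

  # Read one FTL property back out of the bdev_ftl_get_properties output
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '.properties[] | select(.name == "prep_upgrade_on_shutdown") | .value'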
00:30:16.029 03:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
00:30:16.288 [2024-12-10 03:18:10.501057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:16.288 [2024-12-10 03:18:10.501095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:30:16.288 [2024-12-10 03:18:10.501105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms
00:30:16.288 [2024-12-10 03:18:10.501112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:16.288 [2024-12-10 03:18:10.501128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:16.288 [2024-12-10 03:18:10.501135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:30:16.288 [2024-12-10 03:18:10.501141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:30:16.288 [2024-12-10 03:18:10.501146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:16.288 [2024-12-10 03:18:10.501161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:16.288 [2024-12-10 03:18:10.501167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:30:16.288 [2024-12-10 03:18:10.501172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:30:16.288 [2024-12-10 03:18:10.501178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:16.288 [2024-12-10 03:18:10.501221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.157 ms, result 0
00:30:16.288 true
00:30:16.288 03:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:30:16.288 03:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties
00:30:16.288 03:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:16.548 03:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3
00:30:16.548 03:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]]
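The used=3 here follows directly from the cache_device dump: chunks 1 and 2 are CLOSED at utilization 1.0 and chunk 3 is OPEN at 0.001953125, so three chunks hold data, while the INACTIVE chunk 0 and the empty OPEN chunk 4 fall out of the filter. A minimal reconstruction of the same check, assuming the same running target and bdev name:

  # Count NV cache chunks that still hold data (non-zero utilization)
  used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -eq 0 ]] || echo "NV cache still holds data in $used chunks"

A non-zero count is the interesting case for this test: prep_upgrade_on_shutdown presumably only has work to do if the NV cache is dirty when the target goes down.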
00:30:16.548 03:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:30:16.548 [2024-12-10 03:18:10.909390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:16.548 [2024-12-10 03:18:10.909425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:30:16.548 [2024-12-10 03:18:10.909433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:30:16.548 [2024-12-10 03:18:10.909439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:16.548 [2024-12-10 03:18:10.909455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:16.548 [2024-12-10 03:18:10.909461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:30:16.548 [2024-12-10 03:18:10.909467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:30:16.548 [2024-12-10 03:18:10.909473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:16.548 [2024-12-10 03:18:10.909488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:16.548 [2024-12-10 03:18:10.909494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:30:16.548 [2024-12-10 03:18:10.909499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:30:16.548 [2024-12-10 03:18:10.909505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:16.548 [2024-12-10 03:18:10.909547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.163 ms, result 0
00:30:16.548 true
00:30:16.548 03:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:16.808 {
00:30:16.808   "name": "ftl",
00:30:16.808   "properties": [
00:30:16.808     { "name": "superblock_version", "value": 5, "read-only": true },
00:30:16.808     {
00:30:16.808       "name": "base_device",
00:30:16.808       "bands": [
00:30:16.808         { "id": 0, "state": "FREE", "validity": 0.0 },
00:30:16.808         { "id": 1, "state": "FREE", "validity": 0.0 },
00:30:16.808         { "id": 2, "state": "FREE", "validity": 0.0 },
00:30:16.808         { "id": 3, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 4, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 5, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 6, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 7, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 8, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 9, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 10, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 11, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 12, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 13, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 14, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 15, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 16, "state": "FREE", "validity": 0.0 },
00:30:16.809         { "id": 17, "state": "FREE", "validity": 0.0 }
00:30:16.809       ],
00:30:16.809       "read-only": true
00:30:16.809     },
00:30:16.809     {
00:30:16.809       "name": "cache_device",
00:30:16.809       "type": "bdev",
00:30:16.809       "chunks": [
00:30:16.809         { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
00:30:16.809         { "id": 1, "state": "CLOSED", "utilization": 1.0 },
00:30:16.809         { "id": 2, "state": "CLOSED", "utilization": 1.0 },
00:30:16.809         { "id": 3, "state": "OPEN", "utilization": 0.001953125 },
00:30:16.809         { "id": 4, "state": "OPEN", "utilization": 0.0 }
00:30:16.809       ],
00:30:16.809       "read-only": true
00:30:16.809     },
00:30:16.809     { "name": "verbose_mode", "value": true, "unit": "", "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" },
00:30:16.809     { "name": "prep_upgrade_on_shutdown", "value": true, "unit": "", "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" }
00:30:16.809   ]
00:30:16.809 }
00:30:16.809 03:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown
00:30:16.809 03:18:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83106 ]]
00:30:16.809 03:18:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83106
00:30:16.809 03:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83106 ']'
00:30:16.809 03:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83106
00:30:16.809 03:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:30:16.809 03:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:16.809 03:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83106
00:30:16.809 03:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:16.809 03:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:16.809 killing process with pid 83106
00:30:16.809 03:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83106'
00:30:16.809 03:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83106
00:30:16.809 03:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83106
00:30:17.380 [2024-12-10 03:18:11.673834] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:30:17.380 [2024-12-10 03:18:11.683675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:17.380 [2024-12-10 03:18:11.683711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:30:17.380 [2024-12-10 03:18:11.683721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:30:17.380 [2024-12-10 03:18:11.683728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:17.380 [2024-12-10 03:18:11.683745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:30:17.380 [2024-12-10 03:18:11.685839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:17.380 [2024-12-10 03:18:11.685866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:30:17.380 [2024-12-10 03:18:11.685875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.083 ms
00:30:17.380 [2024-12-10 03:18:11.685885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:25.512 [2024-12-10 03:18:19.199800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:25.512 [2024-12-10 03:18:19.199852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:30:25.512 [2024-12-10 03:18:19.199868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7513.871 ms
00:30:25.512 [2024-12-10 03:18:19.199875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:25.512 [2024-12-10 03:18:19.200825] mngt/ftl_mngt.c: 427:trace_step:
*NOTICE*: [FTL][ftl] Action 00:30:25.512 [2024-12-10 03:18:19.200843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:25.512 [2024-12-10 03:18:19.200852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.937 ms 00:30:25.512 [2024-12-10 03:18:19.200858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.512 [2024-12-10 03:18:19.201727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.512 [2024-12-10 03:18:19.201748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:25.512 [2024-12-10 03:18:19.201755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.848 ms 00:30:25.512 [2024-12-10 03:18:19.201765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.512 [2024-12-10 03:18:19.209361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.512 [2024-12-10 03:18:19.209493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:25.512 [2024-12-10 03:18:19.209505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.570 ms 00:30:25.512 [2024-12-10 03:18:19.209511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.512 [2024-12-10 03:18:19.214677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.512 [2024-12-10 03:18:19.214753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:25.512 [2024-12-10 03:18:19.214802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.143 ms 00:30:25.512 [2024-12-10 03:18:19.214819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.512 [2024-12-10 03:18:19.214889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.512 [2024-12-10 03:18:19.214965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:25.512 [2024-12-10 03:18:19.215012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:25.512 [2024-12-10 03:18:19.215027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.222360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.513 [2024-12-10 03:18:19.222501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:25.513 [2024-12-10 03:18:19.222547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.312 ms 00:30:25.513 [2024-12-10 03:18:19.222564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.229677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.513 [2024-12-10 03:18:19.229771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:25.513 [2024-12-10 03:18:19.229826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.082 ms 00:30:25.513 [2024-12-10 03:18:19.229843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.236919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.513 [2024-12-10 03:18:19.237002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:25.513 [2024-12-10 03:18:19.237040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.045 ms 00:30:25.513 [2024-12-10 03:18:19.237058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.244041] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.513 [2024-12-10 03:18:19.244123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:25.513 [2024-12-10 03:18:19.244161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.931 ms 00:30:25.513 [2024-12-10 03:18:19.244177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.244206] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:25.513 [2024-12-10 03:18:19.244234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:25.513 [2024-12-10 03:18:19.244290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:25.513 [2024-12-10 03:18:19.244314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:25.513 [2024-12-10 03:18:19.244336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:25.513 [2024-12-10 03:18:19.244812] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:25.513 [2024-12-10 03:18:19.244827] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f683d4f5-edcb-4c8c-955d-dcedeabc8e49 00:30:25.513 [2024-12-10 03:18:19.244849] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:25.513 [2024-12-10 03:18:19.244863] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:25.513 [2024-12-10 03:18:19.244902] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:25.513 [2024-12-10 03:18:19.244919] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:25.513 [2024-12-10 03:18:19.244938] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:25.513 [2024-12-10 03:18:19.244953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:25.513 [2024-12-10 03:18:19.244969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:25.513 [2024-12-10 03:18:19.244983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:25.513 [2024-12-10 03:18:19.244996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:25.513 [2024-12-10 03:18:19.245010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.513 [2024-12-10 03:18:19.245049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:25.513 [2024-12-10 03:18:19.245067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.804 ms 00:30:25.513 [2024-12-10 03:18:19.245081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.254992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.513 [2024-12-10 03:18:19.255078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:25.513 [2024-12-10 03:18:19.255125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.888 ms 00:30:25.513 [2024-12-10 03:18:19.255142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.255429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.513 [2024-12-10 03:18:19.255484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:25.513 [2024-12-10 03:18:19.255521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:30:25.513 [2024-12-10 03:18:19.255538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.288483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.513 [2024-12-10 03:18:19.288578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:25.513 [2024-12-10 03:18:19.288617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.513 [2024-12-10 03:18:19.288635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.288665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.513 [2024-12-10 03:18:19.288681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:25.513 [2024-12-10 03:18:19.288696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.513 [2024-12-10 03:18:19.288710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.288767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.513 [2024-12-10 03:18:19.288786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:25.513 [2024-12-10 03:18:19.288806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.513 [2024-12-10 03:18:19.288848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.288871] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.513 [2024-12-10 03:18:19.288916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:25.513 [2024-12-10 03:18:19.288934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.513 [2024-12-10 03:18:19.289019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.347147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.513 [2024-12-10 03:18:19.347265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:25.513 [2024-12-10 03:18:19.347308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.513 [2024-12-10 03:18:19.347325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.395880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.513 [2024-12-10 03:18:19.396009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:25.513 [2024-12-10 03:18:19.396047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.513 [2024-12-10 03:18:19.396064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.396137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.513 [2024-12-10 03:18:19.396156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:25.513 [2024-12-10 03:18:19.396171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.513 [2024-12-10 03:18:19.396190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.396231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.513 [2024-12-10 03:18:19.396248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:25.513 [2024-12-10 03:18:19.396264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.513 [2024-12-10 03:18:19.396303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.396404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.513 [2024-12-10 03:18:19.396425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:25.513 [2024-12-10 03:18:19.396440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.513 [2024-12-10 03:18:19.396453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.396494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.513 [2024-12-10 03:18:19.396513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:25.513 [2024-12-10 03:18:19.396529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.513 [2024-12-10 03:18:19.396543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.396580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.513 [2024-12-10 03:18:19.396668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:25.513 [2024-12-10 03:18:19.396683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.513 [2024-12-10 03:18:19.396698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 
[2024-12-10 03:18:19.396742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.513 [2024-12-10 03:18:19.396760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:25.513 [2024-12-10 03:18:19.396775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.513 [2024-12-10 03:18:19.396789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.513 [2024-12-10 03:18:19.396890] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7713.170 ms, result 0 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83618 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83618 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83618 ']' 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:29.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:29.722 03:18:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:29.722 [2024-12-10 03:18:23.787949] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
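The shutdown above persists FTL state and exits in roughly 7.7 seconds, after which tcp_target_setup relaunches spdk_tgt from the saved tgt.json and blocks in waitforlisten until the RPC socket answers. A rough standalone equivalent of that relaunch (the binary, cpumask and config path are taken from this run; the polling loop is only a sketch of what waitforlisten does, using the standard rpc_get_methods RPC, and the retry cap is illustrative):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  # Poll the default /var/tmp/spdk.sock socket until the target services RPCs
  for ((i = 0; i < 100; i++)); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods > /dev/null 2>&1 && break
      sleep 0.5
  done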
00:30:29.722 [2024-12-10 03:18:23.788242] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83618 ] 00:30:29.722 [2024-12-10 03:18:23.945960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.722 [2024-12-10 03:18:24.030254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.295 [2024-12-10 03:18:24.601887] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:30.295 [2024-12-10 03:18:24.601940] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:30.557 [2024-12-10 03:18:24.744929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.557 [2024-12-10 03:18:24.744965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:30.557 [2024-12-10 03:18:24.744975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:30.557 [2024-12-10 03:18:24.744981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.557 [2024-12-10 03:18:24.745020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.557 [2024-12-10 03:18:24.745028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:30.557 [2024-12-10 03:18:24.745034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:30.557 [2024-12-10 03:18:24.745040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.557 [2024-12-10 03:18:24.745057] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:30.557 [2024-12-10 03:18:24.745597] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:30.557 [2024-12-10 03:18:24.745610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.557 [2024-12-10 03:18:24.745616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:30.557 [2024-12-10 03:18:24.745622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.559 ms 00:30:30.557 [2024-12-10 03:18:24.745628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.557 [2024-12-10 03:18:24.746558] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:30.557 [2024-12-10 03:18:24.756312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.557 [2024-12-10 03:18:24.756340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:30.557 [2024-12-10 03:18:24.756352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.756 ms 00:30:30.557 [2024-12-10 03:18:24.756359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.557 [2024-12-10 03:18:24.756419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.557 [2024-12-10 03:18:24.756428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:30.557 [2024-12-10 03:18:24.756434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:30.557 [2024-12-10 03:18:24.756440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.557 [2024-12-10 03:18:24.760801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.557 [2024-12-10 
03:18:24.760825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:30.557 [2024-12-10 03:18:24.760832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.314 ms 00:30:30.557 [2024-12-10 03:18:24.760838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.557 [2024-12-10 03:18:24.760879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.557 [2024-12-10 03:18:24.760886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:30.557 [2024-12-10 03:18:24.760892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:30:30.557 [2024-12-10 03:18:24.760898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.557 [2024-12-10 03:18:24.760932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.557 [2024-12-10 03:18:24.760942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:30.557 [2024-12-10 03:18:24.760948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:30.557 [2024-12-10 03:18:24.760953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.557 [2024-12-10 03:18:24.760968] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:30.557 [2024-12-10 03:18:24.763558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.557 [2024-12-10 03:18:24.763581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:30.557 [2024-12-10 03:18:24.763589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.593 ms 00:30:30.557 [2024-12-10 03:18:24.763596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.557 [2024-12-10 03:18:24.763620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.557 [2024-12-10 03:18:24.763627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:30.557 [2024-12-10 03:18:24.763634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:30.557 [2024-12-10 03:18:24.763639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.557 [2024-12-10 03:18:24.763654] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:30.557 [2024-12-10 03:18:24.763670] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:30.557 [2024-12-10 03:18:24.763696] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:30.557 [2024-12-10 03:18:24.763707] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:30.557 [2024-12-10 03:18:24.763786] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:30.557 [2024-12-10 03:18:24.763794] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:30.557 [2024-12-10 03:18:24.763803] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:30.557 [2024-12-10 03:18:24.763810] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:30.557 [2024-12-10 03:18:24.763817] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:30.557 [2024-12-10 03:18:24.763825] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:30.557 [2024-12-10 03:18:24.763831] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:30.557 [2024-12-10 03:18:24.763836] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:30.557 [2024-12-10 03:18:24.763841] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:30.557 [2024-12-10 03:18:24.763847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.557 [2024-12-10 03:18:24.763852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:30.557 [2024-12-10 03:18:24.763858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:30:30.557 [2024-12-10 03:18:24.763863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.557 [2024-12-10 03:18:24.763935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.557 [2024-12-10 03:18:24.763942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:30.557 [2024-12-10 03:18:24.763950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:30:30.557 [2024-12-10 03:18:24.763955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.557 [2024-12-10 03:18:24.764031] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:30.557 [2024-12-10 03:18:24.764038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:30.557 [2024-12-10 03:18:24.764044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:30.557 [2024-12-10 03:18:24.764049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.557 [2024-12-10 03:18:24.764055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:30.557 [2024-12-10 03:18:24.764060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:30.558 [2024-12-10 03:18:24.764065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:30.558 [2024-12-10 03:18:24.764071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:30.558 [2024-12-10 03:18:24.764077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:30.558 [2024-12-10 03:18:24.764082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.558 [2024-12-10 03:18:24.764087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:30.558 [2024-12-10 03:18:24.764093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:30.558 [2024-12-10 03:18:24.764098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.558 [2024-12-10 03:18:24.764103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:30.558 [2024-12-10 03:18:24.764108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:30.558 [2024-12-10 03:18:24.764113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.558 [2024-12-10 03:18:24.764118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:30.558 [2024-12-10 03:18:24.764124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:30.558 [2024-12-10 03:18:24.764129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.558 [2024-12-10 03:18:24.764134] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:30.558 [2024-12-10 03:18:24.764139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:30.558 [2024-12-10 03:18:24.764144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:30.558 [2024-12-10 03:18:24.764149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:30.558 [2024-12-10 03:18:24.764159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:30.558 [2024-12-10 03:18:24.764164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:30.558 [2024-12-10 03:18:24.764169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:30.558 [2024-12-10 03:18:24.764173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:30.558 [2024-12-10 03:18:24.764178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:30.558 [2024-12-10 03:18:24.764183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:30.558 [2024-12-10 03:18:24.764188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:30.558 [2024-12-10 03:18:24.764193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:30.558 [2024-12-10 03:18:24.764198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:30.558 [2024-12-10 03:18:24.764202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:30.558 [2024-12-10 03:18:24.764207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.558 [2024-12-10 03:18:24.764212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:30.558 [2024-12-10 03:18:24.764217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:30.558 [2024-12-10 03:18:24.764222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.558 [2024-12-10 03:18:24.764226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:30.558 [2024-12-10 03:18:24.764231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:30.558 [2024-12-10 03:18:24.764236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.558 [2024-12-10 03:18:24.764241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:30.558 [2024-12-10 03:18:24.764245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:30.558 [2024-12-10 03:18:24.764250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.558 [2024-12-10 03:18:24.764260] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:30.558 [2024-12-10 03:18:24.764266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:30.558 [2024-12-10 03:18:24.764275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:30.558 [2024-12-10 03:18:24.764280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.558 [2024-12-10 03:18:24.764287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:30.558 [2024-12-10 03:18:24.764293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:30.558 [2024-12-10 03:18:24.764297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:30.558 [2024-12-10 03:18:24.764302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:30.558 [2024-12-10 03:18:24.764307] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:30.558 [2024-12-10 03:18:24.764312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:30.558 [2024-12-10 03:18:24.764319] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:30.558 [2024-12-10 03:18:24.764325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:30.558 [2024-12-10 03:18:24.764332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:30.558 [2024-12-10 03:18:24.764337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:30.558 [2024-12-10 03:18:24.764342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:30.558 [2024-12-10 03:18:24.764347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:30.558 [2024-12-10 03:18:24.764353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:30.558 [2024-12-10 03:18:24.764358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:30.558 [2024-12-10 03:18:24.764363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:30.558 [2024-12-10 03:18:24.764369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:30.558 [2024-12-10 03:18:24.764391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:30.558 [2024-12-10 03:18:24.764397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:30.558 [2024-12-10 03:18:24.764402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:30.558 [2024-12-10 03:18:24.764408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:30.558 [2024-12-10 03:18:24.764413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:30.558 [2024-12-10 03:18:24.764419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:30.558 [2024-12-10 03:18:24.764424] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:30.558 [2024-12-10 03:18:24.764430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:30.558 [2024-12-10 03:18:24.764436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:30.558 [2024-12-10 03:18:24.764442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:30.558 [2024-12-10 03:18:24.764447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:30.558 [2024-12-10 03:18:24.764453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:30.558 [2024-12-10 03:18:24.764461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.558 [2024-12-10 03:18:24.764467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:30.558 [2024-12-10 03:18:24.764473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.484 ms 00:30:30.558 [2024-12-10 03:18:24.764478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.558 [2024-12-10 03:18:24.764513] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:30.558 [2024-12-10 03:18:24.764521] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:34.770 [2024-12-10 03:18:29.037996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.770 [2024-12-10 03:18:29.038340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:34.770 [2024-12-10 03:18:29.038507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4273.464 ms 00:30:34.770 [2024-12-10 03:18:29.038537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.770 [2024-12-10 03:18:29.070113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.770 [2024-12-10 03:18:29.070363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:34.770 [2024-12-10 03:18:29.070536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.286 ms 00:30:34.770 [2024-12-10 03:18:29.070565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.770 [2024-12-10 03:18:29.070693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.770 [2024-12-10 03:18:29.070729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:34.770 [2024-12-10 03:18:29.070752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:34.770 [2024-12-10 03:18:29.070772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.770 [2024-12-10 03:18:29.105995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.770 [2024-12-10 03:18:29.106191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:34.770 [2024-12-10 03:18:29.106314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.164 ms 00:30:34.770 [2024-12-10 03:18:29.106341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.770 [2024-12-10 03:18:29.106419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.770 [2024-12-10 03:18:29.106445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:34.770 [2024-12-10 03:18:29.106466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:34.770 [2024-12-10 03:18:29.106487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.770 [2024-12-10 03:18:29.107051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.770 [2024-12-10 03:18:29.107139] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:34.770 [2024-12-10 03:18:29.107470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.495 ms 00:30:34.770 [2024-12-10 03:18:29.107495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.770 [2024-12-10 03:18:29.107568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.770 [2024-12-10 03:18:29.107595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:34.770 [2024-12-10 03:18:29.107620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:34.770 [2024-12-10 03:18:29.107641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.770 [2024-12-10 03:18:29.125212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.770 [2024-12-10 03:18:29.125398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:34.770 [2024-12-10 03:18:29.125594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.536 ms 00:30:34.770 [2024-12-10 03:18:29.125607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.154262] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:35.032 [2024-12-10 03:18:29.154521] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:35.032 [2024-12-10 03:18:29.154548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.032 [2024-12-10 03:18:29.154559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:35.032 [2024-12-10 03:18:29.154573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.812 ms 00:30:35.032 [2024-12-10 03:18:29.154583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.169678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.032 [2024-12-10 03:18:29.169851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:35.032 [2024-12-10 03:18:29.169872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.037 ms 00:30:35.032 [2024-12-10 03:18:29.169881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.182333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.032 [2024-12-10 03:18:29.182395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:35.032 [2024-12-10 03:18:29.182409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.401 ms 00:30:35.032 [2024-12-10 03:18:29.182417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.194731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.032 [2024-12-10 03:18:29.194775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:35.032 [2024-12-10 03:18:29.194787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.266 ms 00:30:35.032 [2024-12-10 03:18:29.194795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.195460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.032 [2024-12-10 03:18:29.195485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:35.032 [2024-12-10 
03:18:29.195495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.550 ms 00:30:35.032 [2024-12-10 03:18:29.195503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.261045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.032 [2024-12-10 03:18:29.261275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:35.032 [2024-12-10 03:18:29.261300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 65.518 ms 00:30:35.032 [2024-12-10 03:18:29.261310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.272962] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:35.032 [2024-12-10 03:18:29.274057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.032 [2024-12-10 03:18:29.274100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:35.032 [2024-12-10 03:18:29.274114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.692 ms 00:30:35.032 [2024-12-10 03:18:29.274122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.274243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.032 [2024-12-10 03:18:29.274258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:35.032 [2024-12-10 03:18:29.274270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:35.032 [2024-12-10 03:18:29.274278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.274344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.032 [2024-12-10 03:18:29.274355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:35.032 [2024-12-10 03:18:29.274365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:35.032 [2024-12-10 03:18:29.274373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.274418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.032 [2024-12-10 03:18:29.274428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:35.032 [2024-12-10 03:18:29.274440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:35.032 [2024-12-10 03:18:29.274449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.274488] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:35.032 [2024-12-10 03:18:29.274499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.032 [2024-12-10 03:18:29.274507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:35.032 [2024-12-10 03:18:29.274516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:35.032 [2024-12-10 03:18:29.274525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.299817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.032 [2024-12-10 03:18:29.300020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:35.032 [2024-12-10 03:18:29.300042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.271 ms 00:30:35.032 [2024-12-10 03:18:29.300051] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.300133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.032 [2024-12-10 03:18:29.300143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:35.032 [2024-12-10 03:18:29.300153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:30:35.032 [2024-12-10 03:18:29.300161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.032 [2024-12-10 03:18:29.301664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4556.184 ms, result 0 00:30:35.032 [2024-12-10 03:18:29.316416] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.032 [2024-12-10 03:18:29.332411] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:35.032 [2024-12-10 03:18:29.340578] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:35.602 03:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.602 03:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:35.602 03:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:35.602 03:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:35.602 03:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:35.602 [2024-12-10 03:18:29.944958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.602 [2024-12-10 03:18:29.944996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:35.602 [2024-12-10 03:18:29.945011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:35.602 [2024-12-10 03:18:29.945020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.602 [2024-12-10 03:18:29.945041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.602 [2024-12-10 03:18:29.945049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:35.602 [2024-12-10 03:18:29.945057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:35.602 [2024-12-10 03:18:29.945064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.602 [2024-12-10 03:18:29.945083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.602 [2024-12-10 03:18:29.945091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:35.602 [2024-12-10 03:18:29.945099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:35.602 [2024-12-10 03:18:29.945106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.602 [2024-12-10 03:18:29.945161] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.196 ms, result 0 00:30:35.602 true 00:30:35.602 03:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:35.912 { 00:30:35.912 "name": "ftl", 00:30:35.912 "properties": [ 00:30:35.912 { 00:30:35.912 "name": "superblock_version", 00:30:35.912 "value": 5, 00:30:35.912 "read-only": true 00:30:35.912 }, 
00:30:35.912 { 00:30:35.912 "name": "base_device", 00:30:35.912 "bands": [ 00:30:35.912 { 00:30:35.912 "id": 0, 00:30:35.912 "state": "CLOSED", 00:30:35.912 "validity": 1.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 1, 00:30:35.912 "state": "CLOSED", 00:30:35.912 "validity": 1.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 2, 00:30:35.912 "state": "CLOSED", 00:30:35.912 "validity": 0.007843137254901933 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 3, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 4, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 5, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 6, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 7, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 8, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 9, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 10, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 11, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 12, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 13, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 14, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 15, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 16, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 17, 00:30:35.912 "state": "FREE", 00:30:35.912 "validity": 0.0 00:30:35.912 } 00:30:35.912 ], 00:30:35.912 "read-only": true 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "name": "cache_device", 00:30:35.912 "type": "bdev", 00:30:35.912 "chunks": [ 00:30:35.912 { 00:30:35.912 "id": 0, 00:30:35.912 "state": "INACTIVE", 00:30:35.912 "utilization": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 1, 00:30:35.912 "state": "OPEN", 00:30:35.912 "utilization": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 2, 00:30:35.912 "state": "OPEN", 00:30:35.912 "utilization": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 3, 00:30:35.912 "state": "FREE", 00:30:35.912 "utilization": 0.0 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "id": 4, 00:30:35.912 "state": "FREE", 00:30:35.912 "utilization": 0.0 00:30:35.912 } 00:30:35.912 ], 00:30:35.912 "read-only": true 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "name": "verbose_mode", 00:30:35.912 "value": true, 00:30:35.912 "unit": "", 00:30:35.912 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:35.912 }, 00:30:35.912 { 00:30:35.912 "name": "prep_upgrade_on_shutdown", 00:30:35.912 "value": false, 00:30:35.912 "unit": "", 00:30:35.912 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:35.912 } 00:30:35.912 ] 00:30:35.912 } 00:30:35.912 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:35.912 03:18:30 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:35.912 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:36.183 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:36.184 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:36.184 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:36.184 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:36.184 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:36.442 Validate MD5 checksum, iteration 1 00:30:36.442 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:36.442 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:36.442 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:36.442 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:36.442 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:36.442 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:36.442 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:36.442 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:36.442 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:36.442 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:36.442 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:36.442 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:36.442 03:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:36.442 [2024-12-10 03:18:30.640849] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
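Before the first read-back, upgrade_shutdown.sh@82 and @89 (traced above) gate the test on the FTL device being drained: no cache chunk may show utilization and no band may still be OPENED. A minimal standalone sketch of those two checks, reconstructed from the xtrace and the properties JSON dumped above; note that the bands array lives inside the base_device property, so the second filter below selects that object first (the @89 trace matches .name == "bands", a name that does not occur in the dump above, which makes that count 0 by construction):

  # Sketch only: paths and RPC usage taken from the xtrace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  props=$($rpc bdev_ftl_get_properties -b ftl)

  # Count cache chunks with non-zero utilization (the test expects 0 here).
  used=$(jq '[.properties[] | select(.name == "cache_device")
              | .chunks[] | select(.utilization != 0.0)] | length' <<<"$props")

  # Count bands still OPENED; bands are nested under base_device in the dump.
  opened=$(jq '[.properties[] | select(.name == "base_device")
                | .bands[] | select(.state == "OPENED")] | length' <<<"$props")

  [[ $used -eq 0 && $opened -eq 0 ]] || echo "FTL not drained: used=$used opened=$opened"
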
00:30:36.442 [2024-12-10 03:18:30.641083] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83710 ] 00:30:36.442 [2024-12-10 03:18:30.800778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.702 [2024-12-10 03:18:30.894170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.088  [2024-12-10T03:18:33.418Z] Copying: 567/1024 [MB] (567 MBps) [2024-12-10T03:18:34.803Z] Copying: 1024/1024 [MB] (average 543 MBps) 00:30:40.415 00:30:40.415 03:18:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:40.415 03:18:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:42.947 03:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:42.947 Validate MD5 checksum, iteration 2 00:30:42.947 03:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a876db715c4e08991cfa7db56c6c39d3 00:30:42.947 03:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a876db715c4e08991cfa7db56c6c39d3 != \a\8\7\6\d\b\7\1\5\c\4\e\0\8\9\9\1\c\f\a\7\d\b\5\6\c\6\c\3\9\d\3 ]] 00:30:42.947 03:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:42.947 03:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:42.947 03:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:42.947 03:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:42.947 03:18:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:42.947 03:18:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:42.947 03:18:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:42.947 03:18:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:42.947 03:18:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:42.947 [2024-12-10 03:18:36.834503] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
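The spdk_dd run that just completed, together with the skip/md5sum/cut/compare xtrace around it, is one pass of test_validate_checksum: each iteration pulls 1024 MiB out of ftln1 over NVMe/TCP (1 MiB block size, queue depth 2, advancing --skip by 1024 per pass), hashes the scratch file, and compares against the checksum captured when the data was written earlier in the test. A condensed sketch of the loop, reconstructed from the upgrade_shutdown.sh@96-@105 xtrace; the iterations count and the md5 reference array are assumed to have been set on the write pass, which is outside this excerpt:

  test_validate_checksum() {
      local skip=0 i sum
      for ((i = 0; i < iterations; i++)); do
          echo "Validate MD5 checksum, iteration $((i + 1))"
          # tcp_dd wraps spdk_dd against the NVMe/TCP-attached FTL bdev
          # (ftl/common.sh@198-@199 in the trace above).
          tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
                 --bs=1048576 --count=1024 --qd=2 --skip=$skip
          skip=$((skip + 1024))
          sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
          # A mismatch here would mean data was lost across the shutdown cycle.
          [[ $sum == "${md5[i]}" ]] || return 1
      done
  }

Reading the data back through the TCP initiator rather than directly is deliberate: it validates the whole export path, not just the FTL core.
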
00:30:42.947 [2024-12-10 03:18:36.834745] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83779 ] 00:30:42.947 [2024-12-10 03:18:36.991957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.947 [2024-12-10 03:18:37.086265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.329  [2024-12-10T03:18:39.289Z] Copying: 711/1024 [MB] (711 MBps) [2024-12-10T03:18:40.230Z] Copying: 1024/1024 [MB] (average 677 MBps) 00:30:45.842 00:30:45.842 03:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:45.842 03:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:47.757 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:47.757 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8e6c79228dcdfef5cee7514c3b2be6f4 00:30:47.757 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8e6c79228dcdfef5cee7514c3b2be6f4 != \8\e\6\c\7\9\2\2\8\d\c\d\f\e\f\5\c\e\e\7\5\1\4\c\3\b\2\b\e\6\f\4 ]] 00:30:47.757 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:47.757 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:47.757 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:47.757 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83618 ]] 00:30:47.757 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83618 00:30:47.757 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:47.757 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:47.757 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:47.757 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:47.757 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:48.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.018 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83836 00:30:48.018 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:48.018 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83836 00:30:48.018 03:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83836 ']' 00:30:48.018 03:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:48.018 03:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.018 03:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.018 03:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
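What follows from here is the core of the ftl_upgrade_shutdown scenario: common.sh@138 SIGKILLs the target (pid 83618, whose "Killed" notice from autotest_common.sh surfaces a few lines below once the shell reaps the job), so FTL gets no clean shutdown and its superblock stays dirty; tcp_target_setup then relaunches spdk_tgt (pid 83836) from the saved tgt.json, and startup has to recover, which the next log run shows as "SHM: clean 0", P2L checkpoint preprocessing, and two open-chunk recoveries. A sketch of the two helpers, reconstructed from the common.sh@137-@139 and @81-@91 xtrace; bodies beyond what the trace shows are assumptions:

  tcp_target_shutdown_dirty() {
      [[ -n $spdk_tgt_pid ]] || return 0
      kill -9 "$spdk_tgt_pid"   # SIGKILL: the FTL shutdown path never runs
      unset spdk_tgt_pid
  }

  tcp_target_setup() {
      local base_bdev= cache_bdev=
      local cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
      [[ -f $cnfg ]] || return 1   # reuse the config saved before the kill
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config="$cnfg" &
      spdk_tgt_pid=$!
      export spdk_tgt_pid
      waitforlisten "$spdk_tgt_pid"   # autotest_common.sh helper seen in the trace
  }

Using SIGKILL instead of the RPC shutdown is what forces the dirty-state recovery path that the checksum passes after the restart are meant to verify.
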
00:30:48.018 03:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.018 03:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:48.018 [2024-12-10 03:18:42.216219] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:30:48.019 [2024-12-10 03:18:42.216346] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83836 ] 00:30:48.019 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83618 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:48.019 [2024-12-10 03:18:42.378803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.279 [2024-12-10 03:18:42.499883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.215 [2024-12-10 03:18:43.228338] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:49.215 [2024-12-10 03:18:43.228421] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:49.215 [2024-12-10 03:18:43.376694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.215 [2024-12-10 03:18:43.376735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:49.215 [2024-12-10 03:18:43.376748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:49.215 [2024-12-10 03:18:43.376755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.215 [2024-12-10 03:18:43.376804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.215 [2024-12-10 03:18:43.376813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:49.215 [2024-12-10 03:18:43.376822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:49.215 [2024-12-10 03:18:43.376829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.215 [2024-12-10 03:18:43.376850] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:49.215 [2024-12-10 03:18:43.377518] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:49.215 [2024-12-10 03:18:43.377533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.215 [2024-12-10 03:18:43.377540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:49.215 [2024-12-10 03:18:43.377548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.691 ms 00:30:49.215 [2024-12-10 03:18:43.377555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.215 [2024-12-10 03:18:43.377766] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:49.215 [2024-12-10 03:18:43.393952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.216 [2024-12-10 03:18:43.393986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:49.216 [2024-12-10 03:18:43.393997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.186 ms 00:30:49.216 [2024-12-10 03:18:43.394004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.216 [2024-12-10 03:18:43.402966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:49.216 [2024-12-10 03:18:43.402999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:49.216 [2024-12-10 03:18:43.403009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:49.216 [2024-12-10 03:18:43.403015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.216 [2024-12-10 03:18:43.403316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.216 [2024-12-10 03:18:43.403326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:49.216 [2024-12-10 03:18:43.403334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.229 ms 00:30:49.216 [2024-12-10 03:18:43.403341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.216 [2024-12-10 03:18:43.403415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.216 [2024-12-10 03:18:43.403426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:49.216 [2024-12-10 03:18:43.403434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:30:49.216 [2024-12-10 03:18:43.403441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.216 [2024-12-10 03:18:43.403463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.216 [2024-12-10 03:18:43.403470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:49.216 [2024-12-10 03:18:43.403478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:49.216 [2024-12-10 03:18:43.403485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.216 [2024-12-10 03:18:43.403504] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:49.216 [2024-12-10 03:18:43.406540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.216 [2024-12-10 03:18:43.406670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:49.216 [2024-12-10 03:18:43.406686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.040 ms 00:30:49.216 [2024-12-10 03:18:43.406693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.216 [2024-12-10 03:18:43.406728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.216 [2024-12-10 03:18:43.406737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:49.216 [2024-12-10 03:18:43.406745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:49.216 [2024-12-10 03:18:43.406751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.216 [2024-12-10 03:18:43.406770] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:49.216 [2024-12-10 03:18:43.406788] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:49.216 [2024-12-10 03:18:43.406821] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:49.216 [2024-12-10 03:18:43.406837] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:49.216 [2024-12-10 03:18:43.406938] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:49.216 [2024-12-10 03:18:43.406948] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:49.216 [2024-12-10 03:18:43.406958] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:49.216 [2024-12-10 03:18:43.406968] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:49.216 [2024-12-10 03:18:43.406976] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:49.216 [2024-12-10 03:18:43.406984] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:49.216 [2024-12-10 03:18:43.406991] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:49.216 [2024-12-10 03:18:43.406998] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:49.216 [2024-12-10 03:18:43.407005] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:49.216 [2024-12-10 03:18:43.407014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.216 [2024-12-10 03:18:43.407021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:49.216 [2024-12-10 03:18:43.407028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.246 ms 00:30:49.216 [2024-12-10 03:18:43.407035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.216 [2024-12-10 03:18:43.407118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.216 [2024-12-10 03:18:43.407125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:49.216 [2024-12-10 03:18:43.407132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:30:49.216 [2024-12-10 03:18:43.407139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.216 [2024-12-10 03:18:43.407248] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:49.216 [2024-12-10 03:18:43.407261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:49.216 [2024-12-10 03:18:43.407269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:49.216 [2024-12-10 03:18:43.407276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.216 [2024-12-10 03:18:43.407283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:49.216 [2024-12-10 03:18:43.407290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:49.216 [2024-12-10 03:18:43.407296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:49.216 [2024-12-10 03:18:43.407303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:49.216 [2024-12-10 03:18:43.407310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:49.216 [2024-12-10 03:18:43.407316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.216 [2024-12-10 03:18:43.407324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:49.216 [2024-12-10 03:18:43.407331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:49.216 [2024-12-10 03:18:43.407337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.216 [2024-12-10 03:18:43.407344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:49.216 [2024-12-10 03:18:43.407350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:49.216 [2024-12-10 03:18:43.407356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.216 [2024-12-10 03:18:43.407363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:49.216 [2024-12-10 03:18:43.407369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:49.216 [2024-12-10 03:18:43.407393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.216 [2024-12-10 03:18:43.407400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:49.216 [2024-12-10 03:18:43.407407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:49.216 [2024-12-10 03:18:43.407418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:49.216 [2024-12-10 03:18:43.407425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:49.216 [2024-12-10 03:18:43.407431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:49.216 [2024-12-10 03:18:43.407437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:49.216 [2024-12-10 03:18:43.407444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:49.216 [2024-12-10 03:18:43.407450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:49.216 [2024-12-10 03:18:43.407456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:49.216 [2024-12-10 03:18:43.407462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:49.216 [2024-12-10 03:18:43.407469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:49.216 [2024-12-10 03:18:43.407475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:49.216 [2024-12-10 03:18:43.407481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:49.216 [2024-12-10 03:18:43.407487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:49.216 [2024-12-10 03:18:43.407494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.216 [2024-12-10 03:18:43.407500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:49.216 [2024-12-10 03:18:43.407507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:49.216 [2024-12-10 03:18:43.407513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.216 [2024-12-10 03:18:43.407519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:49.216 [2024-12-10 03:18:43.407526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:49.216 [2024-12-10 03:18:43.407532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.216 [2024-12-10 03:18:43.407538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:49.216 [2024-12-10 03:18:43.407543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:49.216 [2024-12-10 03:18:43.407550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.216 [2024-12-10 03:18:43.407557] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:49.216 [2024-12-10 03:18:43.407564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:49.216 [2024-12-10 03:18:43.407571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:49.216 [2024-12-10 03:18:43.407578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:49.216 [2024-12-10 03:18:43.407585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:49.216 [2024-12-10 03:18:43.407592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:49.216 [2024-12-10 03:18:43.407598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:49.216 [2024-12-10 03:18:43.407605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:49.216 [2024-12-10 03:18:43.407611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:49.216 [2024-12-10 03:18:43.407617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:49.216 [2024-12-10 03:18:43.407625] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:49.216 [2024-12-10 03:18:43.407633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:49.216 [2024-12-10 03:18:43.407641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:49.216 [2024-12-10 03:18:43.407648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:49.216 [2024-12-10 03:18:43.407654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:49.216 [2024-12-10 03:18:43.407661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:49.217 [2024-12-10 03:18:43.407668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:49.217 [2024-12-10 03:18:43.407675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:49.217 [2024-12-10 03:18:43.407681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:49.217 [2024-12-10 03:18:43.407688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:49.217 [2024-12-10 03:18:43.407695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:49.217 [2024-12-10 03:18:43.407702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:49.217 [2024-12-10 03:18:43.407708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:49.217 [2024-12-10 03:18:43.407715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:49.217 [2024-12-10 03:18:43.407721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:49.217 [2024-12-10 03:18:43.407728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:49.217 [2024-12-10 03:18:43.407735] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:49.217 [2024-12-10 03:18:43.407742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:49.217 [2024-12-10 03:18:43.407752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:49.217 [2024-12-10 03:18:43.407759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:49.217 [2024-12-10 03:18:43.407765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:49.217 [2024-12-10 03:18:43.407773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:49.217 [2024-12-10 03:18:43.407781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.407788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:49.217 [2024-12-10 03:18:43.407795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.600 ms 00:30:49.217 [2024-12-10 03:18:43.407802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.431228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.431259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:49.217 [2024-12-10 03:18:43.431269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.379 ms 00:30:49.217 [2024-12-10 03:18:43.431276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.431308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.431316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:49.217 [2024-12-10 03:18:43.431324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:49.217 [2024-12-10 03:18:43.431331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.461515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.461641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:49.217 [2024-12-10 03:18:43.461656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.120 ms 00:30:49.217 [2024-12-10 03:18:43.461663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.461686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.461694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:49.217 [2024-12-10 03:18:43.461702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:49.217 [2024-12-10 03:18:43.461713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.461797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.461806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:49.217 [2024-12-10 03:18:43.461814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:49.217 [2024-12-10 03:18:43.461821] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.461857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.461865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:49.217 [2024-12-10 03:18:43.461872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:49.217 [2024-12-10 03:18:43.461879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.476014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.476047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:49.217 [2024-12-10 03:18:43.476057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.111 ms 00:30:49.217 [2024-12-10 03:18:43.476064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.476170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.476181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:49.217 [2024-12-10 03:18:43.476190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:49.217 [2024-12-10 03:18:43.476197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.507285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.507325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:49.217 [2024-12-10 03:18:43.507337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.070 ms 00:30:49.217 [2024-12-10 03:18:43.507345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.516430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.516460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:49.217 [2024-12-10 03:18:43.516476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.519 ms 00:30:49.217 [2024-12-10 03:18:43.516484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.571695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.571739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:49.217 [2024-12-10 03:18:43.571751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.162 ms 00:30:49.217 [2024-12-10 03:18:43.571758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.571884] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:49.217 [2024-12-10 03:18:43.571987] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:49.217 [2024-12-10 03:18:43.572072] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:49.217 [2024-12-10 03:18:43.572162] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:49.217 [2024-12-10 03:18:43.572171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.572179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:49.217 [2024-12-10 
03:18:43.572187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.372 ms 00:30:49.217 [2024-12-10 03:18:43.572194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.572244] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:49.217 [2024-12-10 03:18:43.572255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.572266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:49.217 [2024-12-10 03:18:43.572274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:49.217 [2024-12-10 03:18:43.572282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.586629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.586666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:49.217 [2024-12-10 03:18:43.586676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.326 ms 00:30:49.217 [2024-12-10 03:18:43.586684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.595052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.595083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:49.217 [2024-12-10 03:18:43.595093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:49.217 [2024-12-10 03:18:43.595101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.217 [2024-12-10 03:18:43.595179] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:49.217 [2024-12-10 03:18:43.595304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.217 [2024-12-10 03:18:43.595314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:49.217 [2024-12-10 03:18:43.595323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.126 ms 00:30:49.217 [2024-12-10 03:18:43.595329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.784 [2024-12-10 03:18:44.121180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.784 [2024-12-10 03:18:44.121241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:49.784 [2024-12-10 03:18:44.121256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 525.064 ms 00:30:49.784 [2024-12-10 03:18:44.121264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.784 [2024-12-10 03:18:44.125420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.784 [2024-12-10 03:18:44.125452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:49.784 [2024-12-10 03:18:44.125462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.244 ms 00:30:49.784 [2024-12-10 03:18:44.125470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.784 [2024-12-10 03:18:44.126158] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:49.784 [2024-12-10 03:18:44.126190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.784 [2024-12-10 03:18:44.126199] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:49.784 [2024-12-10 03:18:44.126208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.689 ms 00:30:49.784 [2024-12-10 03:18:44.126215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.784 [2024-12-10 03:18:44.126243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.784 [2024-12-10 03:18:44.126252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:49.784 [2024-12-10 03:18:44.126260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:49.784 [2024-12-10 03:18:44.126272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.784 [2024-12-10 03:18:44.126318] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 531.137 ms, result 0 00:30:49.784 [2024-12-10 03:18:44.126353] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:49.784 [2024-12-10 03:18:44.126475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.784 [2024-12-10 03:18:44.126487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:49.784 [2024-12-10 03:18:44.126495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.123 ms 00:30:49.784 [2024-12-10 03:18:44.126502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.719 [2024-12-10 03:18:44.790631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.719 [2024-12-10 03:18:44.790681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:50.719 [2024-12-10 03:18:44.790703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 663.179 ms 00:30:50.719 [2024-12-10 03:18:44.790711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.719 [2024-12-10 03:18:44.794915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.719 [2024-12-10 03:18:44.794946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:50.719 [2024-12-10 03:18:44.794956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.269 ms 00:30:50.719 [2024-12-10 03:18:44.794963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.719 [2024-12-10 03:18:44.795589] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:50.719 [2024-12-10 03:18:44.795615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.719 [2024-12-10 03:18:44.795623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:50.719 [2024-12-10 03:18:44.795631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.627 ms 00:30:50.719 [2024-12-10 03:18:44.795638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.719 [2024-12-10 03:18:44.795713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.719 [2024-12-10 03:18:44.795723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:50.719 [2024-12-10 03:18:44.795731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:50.719 [2024-12-10 03:18:44.795738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.719 [2024-12-10 
03:18:44.795771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 669.412 ms, result 0 00:30:50.719 [2024-12-10 03:18:44.795810] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:50.719 [2024-12-10 03:18:44.795819] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:50.719 [2024-12-10 03:18:44.795829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.719 [2024-12-10 03:18:44.795836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:50.719 [2024-12-10 03:18:44.795844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1200.667 ms 00:30:50.719 [2024-12-10 03:18:44.795851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.719 [2024-12-10 03:18:44.795879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.719 [2024-12-10 03:18:44.795896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:50.719 [2024-12-10 03:18:44.795904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:50.719 [2024-12-10 03:18:44.795911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.719 [2024-12-10 03:18:44.806724] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:50.719 [2024-12-10 03:18:44.806823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.719 [2024-12-10 03:18:44.806832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:50.719 [2024-12-10 03:18:44.806842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.889 ms 00:30:50.719 [2024-12-10 03:18:44.806849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.719 [2024-12-10 03:18:44.807536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.719 [2024-12-10 03:18:44.807552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:50.719 [2024-12-10 03:18:44.807564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.626 ms 00:30:50.719 [2024-12-10 03:18:44.807571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.719 [2024-12-10 03:18:44.809801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.719 [2024-12-10 03:18:44.809821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:50.719 [2024-12-10 03:18:44.809831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.214 ms 00:30:50.719 [2024-12-10 03:18:44.809839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.719 [2024-12-10 03:18:44.809874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.719 [2024-12-10 03:18:44.809882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:50.719 [2024-12-10 03:18:44.809890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:50.719 [2024-12-10 03:18:44.809900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.719 [2024-12-10 03:18:44.809998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.719 [2024-12-10 03:18:44.810007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:50.719 
[2024-12-10 03:18:44.810015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:50.719 [2024-12-10 03:18:44.810022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.719 [2024-12-10 03:18:44.810040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.719 [2024-12-10 03:18:44.810047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:50.720 [2024-12-10 03:18:44.810055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:50.720 [2024-12-10 03:18:44.810062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.720 [2024-12-10 03:18:44.810092] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:50.720 [2024-12-10 03:18:44.810101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.720 [2024-12-10 03:18:44.810108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:50.720 [2024-12-10 03:18:44.810116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:50.720 [2024-12-10 03:18:44.810123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.720 [2024-12-10 03:18:44.810171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.720 [2024-12-10 03:18:44.810180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:50.720 [2024-12-10 03:18:44.810188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:50.720 [2024-12-10 03:18:44.810195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.720 [2024-12-10 03:18:44.811066] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1433.964 ms, result 0 00:30:50.720 [2024-12-10 03:18:44.823436] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.720 [2024-12-10 03:18:44.839430] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:50.720 [2024-12-10 03:18:44.847560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:50.720 Validate MD5 checksum, iteration 1 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:50.720 03:18:44 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:50.720 03:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:50.720 [2024-12-10 03:18:44.949223] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:30:50.720 [2024-12-10 03:18:44.949490] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83871 ] 00:30:50.981 [2024-12-10 03:18:45.105754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.981 [2024-12-10 03:18:45.183788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.367  [2024-12-10T03:18:47.327Z] Copying: 686/1024 [MB] (686 MBps) [2024-12-10T03:18:48.267Z] Copying: 1024/1024 [MB] (average 675 MBps) 00:30:53.879 00:30:53.879 03:18:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:53.879 03:18:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:56.409 03:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:56.409 03:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a876db715c4e08991cfa7db56c6c39d3 00:30:56.409 03:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a876db715c4e08991cfa7db56c6c39d3 != \a\8\7\6\d\b\7\1\5\c\4\e\0\8\9\9\1\c\f\a\7\d\b\5\6\c\6\c\3\9\d\3 ]] 00:30:56.409 03:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:56.409 03:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:56.409 Validate MD5 checksum, iteration 2 00:30:56.409 03:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:56.409 03:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:56.409 03:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:56.409 03:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:56.409 03:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:56.409 03:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:56.409 03:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:56.409 [2024-12-10 03:18:50.281549] Starting SPDK v25.01-pre git sha1 
86d35c37a / DPDK 24.03.0 initialization... 00:30:56.409 [2024-12-10 03:18:50.281793] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83932 ] 00:30:56.409 [2024-12-10 03:18:50.441983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.409 [2024-12-10 03:18:50.535605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.784  [2024-12-10T03:18:52.739Z] Copying: 678/1024 [MB] (678 MBps) [2024-12-10T03:18:54.641Z] Copying: 1024/1024 [MB] (average 681 MBps) 00:31:00.253 00:31:00.253 03:18:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:00.253 03:18:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8e6c79228dcdfef5cee7514c3b2be6f4 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8e6c79228dcdfef5cee7514c3b2be6f4 != \8\e\6\c\7\9\2\2\8\d\c\d\f\e\f\5\c\e\e\7\5\1\4\c\3\b\2\b\e\6\f\4 ]] 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83836 ]] 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83836 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83836 ']' 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83836 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83836 00:31:02.795 killing process with pid 83836 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83836' 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 83836 00:31:02.795 03:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83836 00:31:03.056 [2024-12-10 03:18:57.261412] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:03.056 [2024-12-10 03:18:57.273659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.273695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:03.056 [2024-12-10 03:18:57.273705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:03.056 [2024-12-10 03:18:57.273712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.056 [2024-12-10 03:18:57.273739] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:03.056 [2024-12-10 03:18:57.275872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.275902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:03.056 [2024-12-10 03:18:57.275910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.121 ms 00:31:03.056 [2024-12-10 03:18:57.275916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.056 [2024-12-10 03:18:57.276092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.276099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:03.056 [2024-12-10 03:18:57.276106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.150 ms 00:31:03.056 [2024-12-10 03:18:57.276112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.056 [2024-12-10 03:18:57.277187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.277298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:03.056 [2024-12-10 03:18:57.277309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.065 ms 00:31:03.056 [2024-12-10 03:18:57.277320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.056 [2024-12-10 03:18:57.278202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.278215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:03.056 [2024-12-10 03:18:57.278222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.856 ms 00:31:03.056 [2024-12-10 03:18:57.278228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.056 [2024-12-10 03:18:57.285617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.285646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:03.056 [2024-12-10 03:18:57.285658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.355 ms 00:31:03.056 [2024-12-10 03:18:57.285664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.056 [2024-12-10 03:18:57.289561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.289588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:03.056 [2024-12-10 03:18:57.289596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.869 ms 00:31:03.056 [2024-12-10 03:18:57.289603] mngt/ftl_mngt.c: 431:trace_step: 
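The killprocess sequence traced above (kill -0 probe, comm lookup, kill, wait) is autotest_common.sh's standard teardown helper. Stripped to the steps visible in the trace, with the sudo special case abbreviated:

    # Simplified sketch of killprocess() from autotest_common.sh.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                  # still alive?
        if [[ $(uname) == Linux ]]; then
            # pid 83836 resolved to reactor_0 above, so it is not a
            # sudo wrapper and can be signalled directly.
            [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap it before removing its shared-memory files
    }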
*NOTICE*: [FTL][ftl] status: 0 00:31:03.056 [2024-12-10 03:18:57.289661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.289668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:03.056 [2024-12-10 03:18:57.289674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:03.056 [2024-12-10 03:18:57.289684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.056 [2024-12-10 03:18:57.297053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.297086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:03.056 [2024-12-10 03:18:57.297093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.357 ms 00:31:03.056 [2024-12-10 03:18:57.297098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.056 [2024-12-10 03:18:57.304100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.304209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:03.056 [2024-12-10 03:18:57.304220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.976 ms 00:31:03.056 [2024-12-10 03:18:57.304225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.056 [2024-12-10 03:18:57.311172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.311269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:03.056 [2024-12-10 03:18:57.311281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.922 ms 00:31:03.056 [2024-12-10 03:18:57.311286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.056 [2024-12-10 03:18:57.318150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.318247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:03.056 [2024-12-10 03:18:57.318257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.822 ms 00:31:03.056 [2024-12-10 03:18:57.318263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.056 [2024-12-10 03:18:57.318285] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:03.056 [2024-12-10 03:18:57.318296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:03.056 [2024-12-10 03:18:57.318304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:03.056 [2024-12-10 03:18:57.318309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:03.056 [2024-12-10 03:18:57.318316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 
[2024-12-10 03:18:57.318345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:03.056 [2024-12-10 03:18:57.318419] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:03.056 [2024-12-10 03:18:57.318425] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f683d4f5-edcb-4c8c-955d-dcedeabc8e49 00:31:03.056 [2024-12-10 03:18:57.318431] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:03.056 [2024-12-10 03:18:57.318437] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:03.056 [2024-12-10 03:18:57.318442] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:03.056 [2024-12-10 03:18:57.318448] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:03.056 [2024-12-10 03:18:57.318453] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:03.056 [2024-12-10 03:18:57.318459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:03.056 [2024-12-10 03:18:57.318468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:03.056 [2024-12-10 03:18:57.318473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:03.056 [2024-12-10 03:18:57.318477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:03.056 [2024-12-10 03:18:57.318483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.318490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:03.056 [2024-12-10 03:18:57.318496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:31:03.056 [2024-12-10 03:18:57.318503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.056 [2024-12-10 03:18:57.328076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.056 [2024-12-10 03:18:57.328101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:03.057 [2024-12-10 03:18:57.328109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.552 ms 00:31:03.057 [2024-12-10 03:18:57.328115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
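In the statistics dump above, WAF is the write amplification factor, conventionally total device writes divided by user writes; this pass issued no user I/O (user writes: 0), so the ratio degenerates to inf:

    # The WAF printed above: 320 device writes / 0 user writes -> inf.
    awk 'BEGIN { total = 320; user = 0; print (user ? total / user : "inf") }'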
00:31:03.057 [2024-12-10 03:18:57.328392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.057 [2024-12-10 03:18:57.328403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:03.057 [2024-12-10 03:18:57.328410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:31:03.057 [2024-12-10 03:18:57.328416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.057 [2024-12-10 03:18:57.361587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.057 [2024-12-10 03:18:57.361614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:03.057 [2024-12-10 03:18:57.361622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.057 [2024-12-10 03:18:57.361632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.057 [2024-12-10 03:18:57.361653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.057 [2024-12-10 03:18:57.361659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:03.057 [2024-12-10 03:18:57.361665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.057 [2024-12-10 03:18:57.361670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.057 [2024-12-10 03:18:57.361729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.057 [2024-12-10 03:18:57.361737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:03.057 [2024-12-10 03:18:57.361743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.057 [2024-12-10 03:18:57.361749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.057 [2024-12-10 03:18:57.361765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.057 [2024-12-10 03:18:57.361771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:03.057 [2024-12-10 03:18:57.361777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.057 [2024-12-10 03:18:57.361782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.057 [2024-12-10 03:18:57.420742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.057 [2024-12-10 03:18:57.420885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:03.057 [2024-12-10 03:18:57.420899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.057 [2024-12-10 03:18:57.420905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.317 [2024-12-10 03:18:57.469133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.317 [2024-12-10 03:18:57.469164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:03.317 [2024-12-10 03:18:57.469172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.317 [2024-12-10 03:18:57.469179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.317 [2024-12-10 03:18:57.469228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.317 [2024-12-10 03:18:57.469235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:03.318 [2024-12-10 03:18:57.469242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.318 [2024-12-10 03:18:57.469247] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.318 [2024-12-10 03:18:57.469291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.318 [2024-12-10 03:18:57.469307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:03.318 [2024-12-10 03:18:57.469313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.318 [2024-12-10 03:18:57.469319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.318 [2024-12-10 03:18:57.469410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.318 [2024-12-10 03:18:57.469418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:03.318 [2024-12-10 03:18:57.469424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.318 [2024-12-10 03:18:57.469430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.318 [2024-12-10 03:18:57.469455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.318 [2024-12-10 03:18:57.469463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:03.318 [2024-12-10 03:18:57.469471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.318 [2024-12-10 03:18:57.469477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.318 [2024-12-10 03:18:57.469505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.318 [2024-12-10 03:18:57.469512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:03.318 [2024-12-10 03:18:57.469518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.318 [2024-12-10 03:18:57.469524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.318 [2024-12-10 03:18:57.469555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.318 [2024-12-10 03:18:57.469565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:03.318 [2024-12-10 03:18:57.469571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.318 [2024-12-10 03:18:57.469576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.318 [2024-12-10 03:18:57.469669] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 195.987 ms, result 0 00:31:03.889 03:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:03.890 03:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:03.890 03:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:03.890 03:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:03.890 03:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:03.890 03:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:03.890 Remove shared memory files 00:31:03.890 03:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:03.890 03:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:03.890 03:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:03.890 03:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:03.890 03:18:58 
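Every Action/Rollback above logs its own duration, and finish_msg reports the totals: 1433.964 ms for 'FTL startup' earlier versus 195.987 ms for this 'FTL shutdown'. To rank the slowest steps from a raw console log with one record per line (console.log is a placeholder name, not a file this job produced), something like this works:

    # Hypothetical helper, not part of the repo: pair each step name
    # with the duration record that follows it, slowest first.
    awk '/trace_step.*name:/     { sub(/.*name: /, ""); step = $0 }
         /trace_step.*duration:/ { printf "%10s ms  %s\n", $(NF-1), step }' \
        console.log | sort -rn | head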
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83618 00:31:03.890 03:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:03.890 03:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:03.890 ************************************ 00:31:03.890 END TEST ftl_upgrade_shutdown 00:31:03.890 ************************************ 00:31:03.890 00:31:03.890 real 1m20.033s 00:31:03.890 user 1m51.971s 00:31:03.890 sys 0m17.615s 00:31:03.890 03:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.890 03:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:03.890 Process with pid 74998 is not found 00:31:03.890 03:18:58 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:31:03.890 03:18:58 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:03.890 03:18:58 ftl -- ftl/ftl.sh@14 -- # killprocess 74998 00:31:03.890 03:18:58 ftl -- common/autotest_common.sh@954 -- # '[' -z 74998 ']' 00:31:03.890 03:18:58 ftl -- common/autotest_common.sh@958 -- # kill -0 74998 00:31:03.890 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74998) - No such process 00:31:03.890 03:18:58 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74998 is not found' 00:31:03.890 03:18:58 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:03.890 03:18:58 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84046 00:31:03.890 03:18:58 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:03.890 03:18:58 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84046 00:31:03.890 03:18:58 ftl -- common/autotest_common.sh@835 -- # '[' -z 84046 ']' 00:31:03.890 03:18:58 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.890 03:18:58 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:03.890 03:18:58 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:03.890 03:18:58 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:03.890 03:18:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:03.890 [2024-12-10 03:18:58.238460] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
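waitforlisten, traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, blocks until the freshly started spdk_tgt answers RPCs. A simplified sketch; the real autotest_common.sh function carries more bookkeeping, and the rpc_get_methods probe is the essential part:

    # Simplified sketch of waitforlisten() from autotest_common.sh.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died early
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                   rpc_get_methods &> /dev/null; then
                return 0                               # RPC server is up
            fi
            sleep 0.1
        done
        return 1
    }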
00:31:03.890 [2024-12-10 03:18:58.238684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84046 ] 00:31:04.149 [2024-12-10 03:18:58.393866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.149 [2024-12-10 03:18:58.469487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.764 03:18:59 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:04.764 03:18:59 ftl -- common/autotest_common.sh@868 -- # return 0 00:31:04.764 03:18:59 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:05.048 nvme0n1 00:31:05.048 03:18:59 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:31:05.048 03:18:59 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:05.048 03:18:59 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:05.309 03:18:59 ftl -- ftl/common.sh@28 -- # stores=e86f0806-f682-4445-85fe-b26e3850e5ae 00:31:05.309 03:18:59 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:31:05.309 03:18:59 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e86f0806-f682-4445-85fe-b26e3850e5ae 00:31:05.570 03:18:59 ftl -- ftl/ftl.sh@23 -- # killprocess 84046 00:31:05.570 03:18:59 ftl -- common/autotest_common.sh@954 -- # '[' -z 84046 ']' 00:31:05.570 03:18:59 ftl -- common/autotest_common.sh@958 -- # kill -0 84046 00:31:05.570 03:18:59 ftl -- common/autotest_common.sh@959 -- # uname 00:31:05.570 03:18:59 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.570 03:18:59 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84046 00:31:05.570 killing process with pid 84046 00:31:05.570 03:18:59 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:05.570 03:18:59 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:05.570 03:18:59 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84046' 00:31:05.570 03:18:59 ftl -- common/autotest_common.sh@973 -- # kill 84046 00:31:05.570 03:18:59 ftl -- common/autotest_common.sh@978 -- # wait 84046 00:31:06.954 03:19:00 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:06.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:06.954 Waiting for block devices as requested 00:31:06.954 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:06.954 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:06.954 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:06.954 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:12.244 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:12.244 03:19:06 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:31:12.244 Remove shared memory files 00:31:12.244 03:19:06 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:12.244 03:19:06 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:31:12.244 03:19:06 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:31:12.244 03:19:06 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:31:12.244 03:19:06 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:12.244 03:19:06 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:31:12.244 
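clear_lvols (ftl/common.sh@28-30 above) enumerates every lvstore over RPC and deletes it, so the leftover store e86f0806-f682-4445-85fe-b26e3850e5ae cannot leak into a later run. As the trace shows, it is just:

    # clear_lvols as recorded in the xtrace above; $rpc_py expands to
    # scripts/rpc.py in the repo.
    clear_lvols() {
        stores=$("$rpc_py" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
        for lvs in $stores; do
            "$rpc_py" bdev_lvol_delete_lvstore -u "$lvs"
        done
    }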
************************************ 00:31:12.244 END TEST ftl 00:31:12.244 ************************************ 00:31:12.244 00:31:12.244 real 13m25.026s 00:31:12.244 user 15m37.337s 00:31:12.244 sys 1m11.391s 00:31:12.244 03:19:06 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:12.244 03:19:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:12.244 03:19:06 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:12.244 03:19:06 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:12.244 03:19:06 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:12.244 03:19:06 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:12.244 03:19:06 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:12.244 03:19:06 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:12.244 03:19:06 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:12.244 03:19:06 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:31:12.244 03:19:06 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:31:12.244 03:19:06 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:31:12.244 03:19:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:12.244 03:19:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.244 03:19:06 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:31:12.244 03:19:06 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:31:12.244 03:19:06 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:31:12.244 03:19:06 -- common/autotest_common.sh@10 -- # set +x 00:31:13.630 INFO: APP EXITING 00:31:13.630 INFO: killing all VMs 00:31:13.630 INFO: killing vhost app 00:31:13.630 INFO: EXIT DONE 00:31:13.891 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:14.464 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:14.464 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:14.464 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:31:14.464 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:31:14.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:14.987 Cleaning 00:31:14.987 Removing: /var/run/dpdk/spdk0/config 00:31:14.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:14.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:14.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:14.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:14.987 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:14.987 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:15.249 Removing: /var/run/dpdk/spdk0 00:31:15.249 Removing: /var/run/dpdk/spdk_pid56899 00:31:15.249 Removing: /var/run/dpdk/spdk_pid57107 00:31:15.249 Removing: /var/run/dpdk/spdk_pid57319 00:31:15.249 Removing: /var/run/dpdk/spdk_pid57418 00:31:15.249 Removing: /var/run/dpdk/spdk_pid57463 00:31:15.249 Removing: /var/run/dpdk/spdk_pid57580 00:31:15.249 Removing: /var/run/dpdk/spdk_pid57598 00:31:15.249 Removing: /var/run/dpdk/spdk_pid57791 00:31:15.249 Removing: /var/run/dpdk/spdk_pid57878 00:31:15.249 Removing: /var/run/dpdk/spdk_pid57974 00:31:15.249 Removing: /var/run/dpdk/spdk_pid58079 00:31:15.249 Removing: /var/run/dpdk/spdk_pid58171 00:31:15.249 Removing: /var/run/dpdk/spdk_pid58216 00:31:15.249 Removing: /var/run/dpdk/spdk_pid58247 00:31:15.249 Removing: /var/run/dpdk/spdk_pid58323 00:31:15.249 Removing: /var/run/dpdk/spdk_pid58396 00:31:15.249 Removing: /var/run/dpdk/spdk_pid58833 00:31:15.249 Removing: /var/run/dpdk/spdk_pid58886 
00:31:15.249 Removing: /var/run/dpdk/spdk_pid58949 00:31:15.249 Removing: /var/run/dpdk/spdk_pid58965 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59056 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59072 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59163 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59179 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59243 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59261 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59314 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59332 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59516 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59553 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59642 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59819 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59898 00:31:15.249 Removing: /var/run/dpdk/spdk_pid59940 00:31:15.249 Removing: /var/run/dpdk/spdk_pid60361 00:31:15.249 Removing: /var/run/dpdk/spdk_pid60459 00:31:15.249 Removing: /var/run/dpdk/spdk_pid60570 00:31:15.249 Removing: /var/run/dpdk/spdk_pid60625 00:31:15.249 Removing: /var/run/dpdk/spdk_pid60645 00:31:15.249 Removing: /var/run/dpdk/spdk_pid60729 00:31:15.249 Removing: /var/run/dpdk/spdk_pid61347 00:31:15.249 Removing: /var/run/dpdk/spdk_pid61388 00:31:15.249 Removing: /var/run/dpdk/spdk_pid61863 00:31:15.249 Removing: /var/run/dpdk/spdk_pid61956 00:31:15.249 Removing: /var/run/dpdk/spdk_pid62065 00:31:15.249 Removing: /var/run/dpdk/spdk_pid62118 00:31:15.249 Removing: /var/run/dpdk/spdk_pid62148 00:31:15.249 Removing: /var/run/dpdk/spdk_pid62169 00:31:15.249 Removing: /var/run/dpdk/spdk_pid64005 00:31:15.249 Removing: /var/run/dpdk/spdk_pid64140 00:31:15.249 Removing: /var/run/dpdk/spdk_pid64144 00:31:15.249 Removing: /var/run/dpdk/spdk_pid64162 00:31:15.249 Removing: /var/run/dpdk/spdk_pid64209 00:31:15.249 Removing: /var/run/dpdk/spdk_pid64213 00:31:15.249 Removing: /var/run/dpdk/spdk_pid64225 00:31:15.249 Removing: /var/run/dpdk/spdk_pid64271 00:31:15.249 Removing: /var/run/dpdk/spdk_pid64275 00:31:15.249 Removing: /var/run/dpdk/spdk_pid64287 00:31:15.249 Removing: /var/run/dpdk/spdk_pid64332 00:31:15.249 Removing: /var/run/dpdk/spdk_pid64336 00:31:15.249 Removing: /var/run/dpdk/spdk_pid64348 00:31:15.249 Removing: /var/run/dpdk/spdk_pid65732 00:31:15.249 Removing: /var/run/dpdk/spdk_pid65829 00:31:15.249 Removing: /var/run/dpdk/spdk_pid67230 00:31:15.249 Removing: /var/run/dpdk/spdk_pid68992 00:31:15.249 Removing: /var/run/dpdk/spdk_pid69061 00:31:15.249 Removing: /var/run/dpdk/spdk_pid69136 00:31:15.249 Removing: /var/run/dpdk/spdk_pid69246 00:31:15.249 Removing: /var/run/dpdk/spdk_pid69337 00:31:15.249 Removing: /var/run/dpdk/spdk_pid69434 00:31:15.249 Removing: /var/run/dpdk/spdk_pid69508 00:31:15.249 Removing: /var/run/dpdk/spdk_pid69583 00:31:15.249 Removing: /var/run/dpdk/spdk_pid69693 00:31:15.249 Removing: /var/run/dpdk/spdk_pid69785 00:31:15.249 Removing: /var/run/dpdk/spdk_pid69875 00:31:15.249 Removing: /var/run/dpdk/spdk_pid69949 00:31:15.249 Removing: /var/run/dpdk/spdk_pid70030 00:31:15.249 Removing: /var/run/dpdk/spdk_pid70134 00:31:15.249 Removing: /var/run/dpdk/spdk_pid70226 00:31:15.249 Removing: /var/run/dpdk/spdk_pid70316 00:31:15.249 Removing: /var/run/dpdk/spdk_pid70390 00:31:15.249 Removing: /var/run/dpdk/spdk_pid70471 00:31:15.249 Removing: /var/run/dpdk/spdk_pid70575 00:31:15.249 Removing: /var/run/dpdk/spdk_pid70667 00:31:15.249 Removing: /var/run/dpdk/spdk_pid70762 00:31:15.249 Removing: /var/run/dpdk/spdk_pid70831 00:31:15.249 Removing: /var/run/dpdk/spdk_pid70911 00:31:15.249 Removing: 
/var/run/dpdk/spdk_pid70985 00:31:15.249 Removing: /var/run/dpdk/spdk_pid71059 00:31:15.249 Removing: /var/run/dpdk/spdk_pid71161 00:31:15.249 Removing: /var/run/dpdk/spdk_pid71253 00:31:15.249 Removing: /var/run/dpdk/spdk_pid71348 00:31:15.249 Removing: /var/run/dpdk/spdk_pid71422 00:31:15.249 Removing: /var/run/dpdk/spdk_pid71493 00:31:15.249 Removing: /var/run/dpdk/spdk_pid71567 00:31:15.249 Removing: /var/run/dpdk/spdk_pid71641 00:31:15.249 Removing: /var/run/dpdk/spdk_pid71750 00:31:15.249 Removing: /var/run/dpdk/spdk_pid71835 00:31:15.249 Removing: /var/run/dpdk/spdk_pid71979 00:31:15.249 Removing: /var/run/dpdk/spdk_pid72264 00:31:15.249 Removing: /var/run/dpdk/spdk_pid72301 00:31:15.249 Removing: /var/run/dpdk/spdk_pid72747 00:31:15.249 Removing: /var/run/dpdk/spdk_pid72934 00:31:15.249 Removing: /var/run/dpdk/spdk_pid73036 00:31:15.249 Removing: /var/run/dpdk/spdk_pid73146 00:31:15.512 Removing: /var/run/dpdk/spdk_pid73195 00:31:15.512 Removing: /var/run/dpdk/spdk_pid73220 00:31:15.512 Removing: /var/run/dpdk/spdk_pid73527 00:31:15.512 Removing: /var/run/dpdk/spdk_pid73576 00:31:15.512 Removing: /var/run/dpdk/spdk_pid73650 00:31:15.512 Removing: /var/run/dpdk/spdk_pid74051 00:31:15.512 Removing: /var/run/dpdk/spdk_pid74197 00:31:15.512 Removing: /var/run/dpdk/spdk_pid74998 00:31:15.512 Removing: /var/run/dpdk/spdk_pid75131 00:31:15.512 Removing: /var/run/dpdk/spdk_pid75311 00:31:15.512 Removing: /var/run/dpdk/spdk_pid75408 00:31:15.512 Removing: /var/run/dpdk/spdk_pid75722 00:31:15.512 Removing: /var/run/dpdk/spdk_pid75981 00:31:15.512 Removing: /var/run/dpdk/spdk_pid76335 00:31:15.512 Removing: /var/run/dpdk/spdk_pid76523 00:31:15.512 Removing: /var/run/dpdk/spdk_pid76690 00:31:15.512 Removing: /var/run/dpdk/spdk_pid76751 00:31:15.512 Removing: /var/run/dpdk/spdk_pid76945 00:31:15.512 Removing: /var/run/dpdk/spdk_pid76981 00:31:15.512 Removing: /var/run/dpdk/spdk_pid77028 00:31:15.512 Removing: /var/run/dpdk/spdk_pid77270 00:31:15.512 Removing: /var/run/dpdk/spdk_pid77506 00:31:15.512 Removing: /var/run/dpdk/spdk_pid78204 00:31:15.512 Removing: /var/run/dpdk/spdk_pid78927 00:31:15.512 Removing: /var/run/dpdk/spdk_pid79670 00:31:15.512 Removing: /var/run/dpdk/spdk_pid80469 00:31:15.512 Removing: /var/run/dpdk/spdk_pid80605 00:31:15.512 Removing: /var/run/dpdk/spdk_pid80688 00:31:15.512 Removing: /var/run/dpdk/spdk_pid81101 00:31:15.512 Removing: /var/run/dpdk/spdk_pid81155 00:31:15.512 Removing: /var/run/dpdk/spdk_pid81779 00:31:15.512 Removing: /var/run/dpdk/spdk_pid82258 00:31:15.512 Removing: /var/run/dpdk/spdk_pid83106 00:31:15.512 Removing: /var/run/dpdk/spdk_pid83228 00:31:15.512 Removing: /var/run/dpdk/spdk_pid83270 00:31:15.512 Removing: /var/run/dpdk/spdk_pid83330 00:31:15.512 Removing: /var/run/dpdk/spdk_pid83379 00:31:15.512 Removing: /var/run/dpdk/spdk_pid83442 00:31:15.512 Removing: /var/run/dpdk/spdk_pid83618 00:31:15.512 Removing: /var/run/dpdk/spdk_pid83710 00:31:15.512 Removing: /var/run/dpdk/spdk_pid83779 00:31:15.512 Removing: /var/run/dpdk/spdk_pid83836 00:31:15.512 Removing: /var/run/dpdk/spdk_pid83871 00:31:15.512 Removing: /var/run/dpdk/spdk_pid83932 00:31:15.512 Removing: /var/run/dpdk/spdk_pid84046 00:31:15.512 Clean 00:31:15.512 03:19:09 -- common/autotest_common.sh@1453 -- # return 0 00:31:15.512 03:19:09 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:31:15.512 03:19:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:15.512 03:19:09 -- common/autotest_common.sh@10 -- # set +x 00:31:15.512 03:19:09 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:31:15.512 03:19:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:15.512 03:19:09 -- common/autotest_common.sh@10 -- # set +x 00:31:15.512 03:19:09 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:15.512 03:19:09 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:15.512 03:19:09 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:15.773 03:19:09 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:31:15.773 03:19:09 -- spdk/autotest.sh@398 -- # hostname 00:31:15.773 03:19:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:15.773 geninfo: WARNING: invalid characters removed from testname! 00:31:42.365 03:19:34 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:44.267 03:19:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:46.173 03:19:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:47.556 03:19:41 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:50.100 03:19:44 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:52.017 03:19:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:54.562 03:19:48 -- spdk/autotest.sh@408 -- # 
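The coverage post-processing above boils down to one merge plus five filter passes over the combined tracefile (output paths shortened here, and the long --rc lcov/genhtml option blocks repeated on every invocation are elided):

    # Condensed from the lcov commands traced above.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info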
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:54.562 03:19:48 -- spdk/autorun.sh@1 -- $ timing_finish 00:31:54.562 03:19:48 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:31:54.562 03:19:48 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:54.562 03:19:48 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:31:54.562 03:19:48 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:54.562 + [[ -n 5015 ]] 00:31:54.562 + sudo kill 5015 00:31:54.573 [Pipeline] } 00:31:54.589 [Pipeline] // timeout 00:31:54.594 [Pipeline] } 00:31:54.608 [Pipeline] // stage 00:31:54.613 [Pipeline] } 00:31:54.627 [Pipeline] // catchError 00:31:54.637 [Pipeline] stage 00:31:54.640 [Pipeline] { (Stop VM) 00:31:54.652 [Pipeline] sh 00:31:54.938 + vagrant halt 00:31:57.518 ==> default: Halting domain... 00:32:01.740 [Pipeline] sh 00:32:02.025 + vagrant destroy -f 00:32:04.570 ==> default: Removing domain... 00:32:05.147 [Pipeline] sh 00:32:05.426 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:32:05.434 [Pipeline] } 00:32:05.448 [Pipeline] // stage 00:32:05.454 [Pipeline] } 00:32:05.468 [Pipeline] // dir 00:32:05.473 [Pipeline] } 00:32:05.487 [Pipeline] // wrap 00:32:05.493 [Pipeline] } 00:32:05.505 [Pipeline] // catchError 00:32:05.514 [Pipeline] stage 00:32:05.516 [Pipeline] { (Epilogue) 00:32:05.528 [Pipeline] sh 00:32:05.807 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:11.092 [Pipeline] catchError 00:32:11.093 [Pipeline] { 00:32:11.102 [Pipeline] sh 00:32:11.384 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:11.384 Artifacts sizes are good 00:32:11.394 [Pipeline] } 00:32:11.408 [Pipeline] // catchError 00:32:11.418 [Pipeline] archiveArtifacts 00:32:11.425 Archiving artifacts 00:32:11.519 [Pipeline] cleanWs 00:32:11.532 [WS-CLEANUP] Deleting project workspace... 00:32:11.532 [WS-CLEANUP] Deferred wipeout is used... 00:32:11.539 [WS-CLEANUP] done 00:32:11.541 [Pipeline] } 00:32:11.556 [Pipeline] // stage 00:32:11.561 [Pipeline] } 00:32:11.574 [Pipeline] // node 00:32:11.579 [Pipeline] End of Pipeline 00:32:11.640 Finished: SUCCESS